Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Can a person who is blind come to see through
her tongue? Can a baby be born without ears? What
is it like to have the sense of smell of a dog?
And what does any of this have to do with
airplane pilots or Westworld or Potato Head? Welcome to Inner
(00:25):
Cosmos with me, David Eagleman. I'm a neuroscientist and an author,
and my fascination for a very long time has been
how brains perceive reality, because the strange part is that
we're not seeing most of the action that's going on
out there. So today we're going to dive into that
and we're going to see how we might expand our perception.
(00:50):
We're built out of really small stuff like DNA, and
we're embedded in a very large cosmos, and we're not
particularly good at receiving reality at either of those scales.
And that's because we've evolved to deal with reality at
its very thin slice in between, at the level of
rivers and apples and rabbits and stuff like that. But
(01:13):
even here, at our level of perception, we're not seeing
most of the action that's going on. So take, for example,
the colors of our world. So picture the reds and
blues and greens and purples. These are light waves that
bounce off objects and hit these specialized receptors at the
back of our eyes, and then we perceive these colors,
(01:35):
but we're not seeing all the light waves that are
out there. In fact, what we see is less than
a ten trillionth of the light waves out there. So
if you look at what's called the electromagnetic spectrum, you
have radio waves and microwaves and X rays and gamma rays.
All these are light. They're just different frequencies. These are
(01:58):
passing through your body right now, and you're completely unaware
of them because your biology doesn't come with the right
receptors to pick those up. They are light, but they're
not visible light. There are thousands of cell phone conversations
passing through you right now, and you're completely blind to them. Now,
(02:20):
it's not that these other wavelengths of light are inherently unseeable.
Snakes include some infrared light in their reality, and honeybees
include ultraviolet light in their view of the world. And
of course we build machines in the dashboards of our
cars to pick up on signals in the radio frequency range,
(02:41):
and we build machines and hospitals to pick up on
the X ray range and so on. But you can't
sense any of these things by yourself, at least not yet,
because you don't come equipped with the proper sensors. Now,
what this means is that our experience of reality is
constrained by our biology, and that goes against this
(03:03):
common sense notion that our eyes and our ears and
our fingertips are just picking up on the objective reality
out there. Instead, what this means is that our brains
are sampling just a little bit of the world. Now,
across the animal kingdom, different animals pick up on different
(03:24):
parts of reality. So take the tick. It's blind and
deaf, and in its little world, the important signals are
temperature and body odor (butyric acid), and that's all it
picks up on, and that's how it constructs its reality.
For a fish called the black ghost knife fish, its
(03:46):
sensory world is all about electrical fields and the perturbations
of those fields when it's passing a rock or another creature,
and that's all it's picking up on. For the echolocating bat,
its reality is constructed out of air compression waves that
bounce off objects and come back to it. So for
these different animals, that's the slice of their ecosystem that
(04:10):
they can pick up on, and that's all they're seeing.
And we have a word for this in science. This
is called the umwelt, which is the German word for
the surrounding world. Now, every animal is very limited in
the umwelt that it can pick up on, but presumably
every animal assumes that its umwelt is the entire objective
(04:32):
reality that's out there, because why would you ever stop
to imagine that there's something beyond what you can sense. Instead,
we all accept reality as it is presented to us.
So let's do a consciousness raiser on this. Imagine that
you are your family dog, and your whole world is
(04:53):
about smelling. So you've got this long snout that has
two hundred million scent receptors in it, and you have
wet nostrils that attract and trap scent molecules. And your
nostrils even have slits so you can take these big
nose fulls of air. You have floppy ears to kick
up more scent. Everything is about smell for you. So
(05:14):
one day you stop in your tracks with a revelation
and you look at your human owners and you think,
what is it like to have the pitiful little nose
of a human? What is it like when they take
a little feeble nose full of air? How can a
human not know that there's a cat one hundred yards
(05:34):
away or that their best friend was on this very
spot six hours ago. But because we're humans, we've
never experienced that world of smell, we don't miss it
and we don't even think about it. Because we are
firmly settled into our umwelt, we don't feel like there's
a black hole of smell that we're missing there. We
(05:55):
think we've got the whole world. But the question is
do we have to be stuck in the umwelt into
which we were born? So as a neuroscientist, I've always
been interested in the way that our technology might allow
us to expand our umwelt and how that's going to
change the experience of being human. So we're already quite
(06:17):
good at marrying our technology to our biology. You may
know this, but there are hundreds of thousands of people
walking around with artificial hearing and artificial vision. The way
this works, for example, with artificial hearing is you have
a microphone and you digitize the signal and you put
(06:37):
an electrode strip directly into the inner ear. Or with
artificial vision, you have what's called a retinal implant, where
you take a camera and you digitize this signal and
you plug an electrode grid directly into the back of
the eye and the optic nerve. Now, as recently as
twenty five years ago, there were a lot of scientists
(06:59):
who thought these technologies were never going to work. Why? It's
because these technologies speak the language of Silicon Valley and
zeros and ones, and it's not exactly the same dialect
as our natural biological sense organs. But the fact is
that these technologies work. The brain figures out how to
(07:20):
use the signals just fine. Now, how do we understand that?
The key to understanding this requires diving one level deeper.
Your three pounds of brain tissue are not hearing or
seeing the world around you directly. It's not that your
eyes are piping in light or your ears are piping
(07:40):
sound in. Instead, your brain is locked in a crypt
of silence and darkness inside your skull. All it ever experiences
are electrochemical signals that stream in along different data cables.
That's all it has to work with are these little
electrical spikes and chemical releases. It's just a world of
(08:05):
spikes running around in darkness inside there, and in ways
that we're still working to understand. The brain is shockingly
good at taking these signals running around and extracting patterns,
and to those patterns it assigns meaning, and with that meaning,
you have subjective experience. So the brain is an organ
(08:26):
that converts sparks in the dark into a picture show
of your world. All the hues and aromas and emotions
and sensations of your life. These are encoded in trillions
of signals zipping around in the blackness. So you know
when you watch a beautiful screensaver on your computer screen,
that's just built out of zeros and ones and transistors,
(08:49):
and it's somehow the same thing that's happening with your
experience of the world. Let's understand this just a little
bit more. Imagine that you traveled over to an island
of people who are all born blind, so they all
read by braille. They feel tiny patterns of inputs on
their fingertips. So you watch them read a book and
(09:11):
they're brushing over the small bumps with their fingers and
you watch them laugh and cry at the book they're reading,
and you might wonder how can they fit all that
emotion into the tip of their finger. So you explain
to them that when you read a novel, you aim
these spheres on your face towards visual patterns of lines
(09:34):
and curves on a page, and each of your eyes
has a lawn of cells that catch photons, and in
this way you can register the shapes of the symbols.
And you tell them that you have memorized a set
of rules by which different shapes on the page represent
different sounds. So for each squiggle that you detect with
(09:56):
your eyes, you recite a small sound in your head,
imagining what you would hear if someone were speaking that
out loud. And so the resulting pattern of neurochemical signaling
makes you laugh or cry. You couldn't blame the islanders
for finding your story difficult to understand. How do you
(10:17):
fit all that emotion into two spheres on your head? Okay,
So you or they would finally have to allow something,
which is that the fingertip or the eyeball is just
the peripheral device that converts information from the outside world
into spikes in the brain. And then the brain does
all the hard work of the interpretation. You and the
(10:41):
Islanders would break bread over the fact that in the end,
it's all about the trillions of spikes racing around in
the brain, and that the method of entry simply isn't
the part that matters, because your brain doesn't know and
it doesn't care where it gets the data from. Whatever
information comes in from the outside, it just figures out
(11:03):
what to do with it. And this is a very
efficient kind of machine. It is essentially a general purpose
computing device. It just takes in everything and it figures
out what it's going to do with it. And in
my work, I've proposed that this frees up Mother Nature
to tinker around with different sorts of input channels. So
(11:26):
I've argued in my talks and books and papers that
we can send information into the brain via unusual pathways.
And I call this the PH model of evolution. And
I don't want to get too technical here, but PH
stands for Potato Head, and I use this name to
emphasize that all these sensors that we know and love,
(11:49):
like our eyes and our ears and our fingertips, these
are merely peripheral, plug and play devices, you stick them
in and you're good to go, just like with a
Potato Head. Wherever you attach these devices, the brain figures
out what to do with the data that comes in.
(12:09):
And by the way, when you look across the animal kingdom,
you find lots of interesting peripheral devices. So snakes have
heat pits with which they detect the infrared light. And
the black ghost knifefish has electro receptors up and down
its body. That's how it detects the changes in the
electrical field. And there's an animal called the star nosed mole,
(12:33):
which essentially has this nose with twenty two fingers on it,
and it moves around through its three dimensional tunnel system
and feels around and constructs a model of its world
that way. And many birds and cows and insects have
specializations so that they can feel the magnetic field of
(12:54):
the planet. This is called magneto reception, and they navigate
that way. The idea with the potato head model is
that Mother Nature doesn't have to continually redesign the brain
every time she introduces some new peripheral device. Instead, with
the principles of brain operation already established, all she has
(13:17):
to do is worry about designing new peripheral devices to
pick up on new information from the world. So in
the same way you can plug an arbitrary nose or
eyes or mouth into a Potato Head, nature likewise plugs all
kinds of instrumentation into the brain for the purpose of
detecting these energy sources in the outside world. Now, the
(13:54):
idea of looking at our peripheral sensors as individual, standalone
devices might seem bizarre, because, after all, aren't there thousands
of genes involved with building these devices, and don't these
genes overlap with other pieces and parts of the body.
Can we really look at the nose or the eye,
(14:16):
or the ear or the tongue as a device that
stands alone. So I started studying this question because I thought,
if the potato head model is correct, wouldn't that suggest
we might find switches in the genetics that lead to
the presence or absence of a peripheral device. And as
it turns out, that's precisely what can happen. So, for example,
(14:39):
some babies are born completely missing a nose, and they
also lack the nasal cavity and the whole system for smelling.
This is called arhinia. Now, these kinds of mutations
seem startling and difficult to fathom. But in our
plug and play framework, arhinia is predictable. With a
slight tweak of the genes, the peripheral device simply doesn't
(15:03):
get built. Or consider other babies who are born normal,
but they have no eyes. This is called anophthalmia, and
others are born without tongues. Some babies are born without ears,
that's called anotia. Some children are born without any pain receptors,
and more generally, others are born without any touch receptors.
(15:24):
This is called anaphia. And so when we look at
these situations, it becomes clear that our peripheral detectors unpack
because of specific genetic programs. And if you have a
minor malfunction in the genes, that can halt the program,
and then the brain just doesn't get that particular data
(15:45):
stream of information from the world, whether that's smell molecules,
or photons or air compression waves or touch or whatever.
For me, the lesson that comes together here is that
nature designs ways of extracting information from the world,
and these unpack with their own little genetic instructions. Now,
what this implies is that there's nothing really fundamental about
(16:09):
the devices that you and I come to the table
with our eyes and our ears and our nose and
our fingertips. It's just what we've inherited from a complex
road of evolution. But that particular collection of sensors might
not have to be what we stick with, because the
brain's ability to decode different kinds of incoming information implies
(16:32):
the crazy prediction that you might be able to get
some sensory cable going into the brain to carry a
different kind of sensory information. For example, what if you
took a data stream from a video camera and converted
that into touch on your skin. Would the brain eventually
be able to interpret the visual world simply by feeling it?
(16:56):
And this is the stranger than fiction world of sensory substitution.
Sensory substitution refers to the idea of feeding information into
the brain via unusual sensory channels and the brain just
figures out what to do with the information. Now, that
might sound speculative, but the first paper demonstrating this was
(17:19):
published in the journal Nature in nineteen sixty nine. There
was a scientist named Paul Bach-y-Rita, and he put
blind people in a modified dental chair and he set
up a video feed and he would put something in
front of the camera and then the person would feel
that poked into their back with a grid of solenoids.
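To make the mechanics concrete, here is a minimal Python sketch of how a camera-to-skin mapping like this could work. The grid size, the brightness-to-push rule, and the function name are assumptions for illustration, not Bach-y-Rita's actual hardware.

```python
import numpy as np

GRID_ROWS, GRID_COLS = 20, 20  # assumed layout of the solenoid grid on the back

def frame_to_tactile(frame: np.ndarray) -> np.ndarray:
    """Downsample a grayscale camera frame (values 0-255) into one
    push strength per solenoid, scaled 0.0-1.0."""
    rows = np.array_split(np.arange(frame.shape[0]), GRID_ROWS)
    cols = np.array_split(np.arange(frame.shape[1]), GRID_COLS)
    strengths = np.empty((GRID_ROWS, GRID_COLS))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            # each solenoid pushes harder where its patch of the image is brighter
            strengths[i, j] = frame[np.ix_(r, c)].mean() / 255.0
    return strengths
```

Under these assumptions, each camera frame becomes a twenty-by-twenty pattern of pushes, so a coffee cup in front of the camera shows up as a cup-shaped patch of pressure on the back.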
(17:42):
So if he put a coffee cup in front of
the camera, they would feel the shape of a coffee
cup in their back. Or he puts a telephone in
front of the camera, and they feel a telephone in
their back. And amazingly, people who were blind got pretty
good at being able to determine what was in front
of the camera, just by feeling it in the
(18:02):
small of their back. So Bach-y-Rita summarized his findings
by saying, quote, the brain is able to use incoming
information from the skin as if it were coming from
the eyes end quote. The subjective experience for the blind
people who were feeling this in their back was that
visual objects were located out there instead of on the
(18:27):
skin of their back. In other words, it was something
like vision. And think about it this way. When you're
at the coffee shop and you see your friend waving
at you across the way, the photons from your friend
are impinging on your photoreceptors in your eye. But you
don't perceive that the signal is at your eyes or
in your brain. You perceive that your friend is out
(18:48):
there waving at you from a distance. And so it
goes with the users of Bach-y-Rita's modified dental chair.
They were perceiving the object out there. Now, amazingly, while
Bach-y-Rita's device was the first to hit public consciousness,
it was not actually the first attempt at sensory substitution.
(19:10):
On the other side of the world, at the end
of the eighteen nineties, a Polish ophthalmologist developed a crude
device for people who were blind. He put a single
photo cell on the forehead of a blind person, and
the more light that hit it, the louder a sound
would be in the person's ear, so based on the
(19:33):
sound's intensity, the blind person could tell where there were
lights or where there were dark areas. Unfortunately, the whole
device was very large and heavy, and of course it
was only one pixel of resolution, so it never got
any traction. But in nineteen sixty another group in Poland
picked up the ball and ran with it. They recognized
(19:54):
that hearing is critical for the blind, so rather than occupy
the ears, they turned to passing in the light information via touch. They built
a helmet that had all these vibratory motors in it,
and they essentially drew the images on the head, and
blind participants were able to move around in these specially
(20:15):
prepared rooms that were painted to enhance the contrast of
the door frames and the furniture edges. It worked. Unfortunately,
it was also heavy and would get very hot, and
so the world had to wait. But the proof of
principle was starting to emerge. Now, why did these strange
approaches work? It's because input to the brain, whether that's
(20:39):
from photons of the eyes, or air compression waves of
the ears, or pressure on the skin, they're all converted
into the common currency of electrical signals. So as long
as the incoming spikes carry information that represents something important
about the outside world, the brain will learn how to
(21:00):
interpret it. The vast forests of your brain cells in
the dark, they don't care about how the spikes get there.
They just do their work on it. Now, there have
been all kinds of incarnations of sensory substitution for the blind.
One also from the nineteen sixties, is called the sonic glasses.
(21:20):
It takes a video feed right in front of you
and turns that into a sound landscape. So as things
move around and get closer and farther, it sounds
like a cacophony. But after some time, blind
people start getting really good at understanding what is in
(21:42):
front of them just based on what they're hearing through
their ears. And the best example of this is a
program that you can download on your cell phone called
the vOICe. Note that the three middle letters are O,
I, C. Anyway, this was developed by an engineer named
Peter Meijer in the Netherlands, and it started as a bulky project,
(22:04):
but it can now be downloaded on your phone. You
point your phone camera at things and the program converts
what the phone sees into sounds. The app is amazing
and you can download this onto your phone and start
walking around in the world with it and really understand
what's going on when you convert sight into sound.
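To give a feel for how sight can become sound, here is a toy Python sketch of one common scheme, which the vOICe roughly follows: sweep the image from left to right, let vertical position set pitch and brightness set loudness. The frequency range, scan time, and function name are assumptions for illustration, not Meijer's actual code.

```python
import numpy as np

SAMPLE_RATE = 22050
SCAN_SECONDS = 1.0          # time to sweep one image from left to right
F_LOW, F_HIGH = 500, 5000   # assumed pitch range in Hz

def image_to_soundscape(img: np.ndarray) -> np.ndarray:
    """img: grayscale array (rows x cols), values 0-255, row 0 at the top.
    Returns a mono audio buffer that sweeps the image column by column."""
    rows, cols = img.shape
    samples_per_col = int(SAMPLE_RATE * SCAN_SECONDS / cols)
    freqs = np.linspace(F_HIGH, F_LOW, rows)   # higher in the image -> higher pitch
    audio = []
    for c in range(cols):
        t = np.arange(samples_per_col) / SAMPLE_RATE
        loudness = img[:, c] / 255.0           # brighter pixel -> louder tone
        tones = np.sin(2 * np.pi * np.outer(freqs, t))
        audio.append(loudness @ tones / rows)
    return np.concatenate(audio)
```

With a mapping like this, a bright diagonal line in the camera image becomes a tone that rises or falls in pitch as the sweep crosses it.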
(22:25):
And my colleagues all over the world, like Jamie Ward and
Amir Amedi, have been running science experiments on these sorts
of approaches. And by the way, the sensory substitution doesn't
have to be through the ears. Another version is called
the BrainPort, and this is a little grid. It's
called an electrotactile grid. It sits on your tongue
(22:47):
and gives little shocks. So you have a camera and
that video feed gets turned into these little shocks on
your tongue. It feels like pop rocks in your mouth.
And blind people can get so good at using this
that they can throw a ball into a basket, or
they can navigate a complex obstacle course. They can come
to see through their tongue. Now that sounds completely insane, right,
(23:13):
but remember, all vision ever is is these electrical signals
coursing around in your brain. Your brain doesn't know where
the signals come from, it just figures out what to
do with them. So my laboratory set out some years
ago to solve sensory substitution for people who are deaf,
(23:34):
and we wanted to make it so that the sound
from the world gets converted in some ways so that
a deaf person can understand what is being said. So
with my graduate student, Scott Novic, we built a vest.
Now this is not a normal vest. This is a
vest that zips up tight around the torso and it
has thirty two little motors on it. And these are
(23:58):
vibratory motors like the buzzer on your cell phone, but
thirty two of them, and they're distributed pretty evenly around
your waist and your back, and each motor represents a
different frequency of sound from low to high. And by
breaking up sound in this way, this is the same
thing that your inner ear does in a part of your
(24:20):
inner ear called the cochlea. So we have essentially transferred the
cochlea to the torso, so it captures sound and turns
that into these patterns of vibration. So some years ago we
started to test this in conjunction with the deaf community.
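Here is a minimal Python sketch of that sound-to-vibration idea. The thirty-two motors come from the episode; the frequency range, the logarithmic band spacing, and the simple FFT filterbank are my assumptions standing in for the vest's real signal processing.

```python
import numpy as np

N_MOTORS = 32
SAMPLE_RATE = 16000
F_LOW, F_HIGH = 100, 8000   # assumed frequency range split across the motors

# log-spaced band edges, roughly how the cochlea lays out frequency
band_edges = np.geomspace(F_LOW, F_HIGH, N_MOTORS + 1)

def audio_window_to_motors(window: np.ndarray) -> np.ndarray:
    """Map one short chunk of audio samples to 32 motor intensities (0-1)."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / SAMPLE_RATE)
    intensities = np.empty(N_MOTORS)
    for i in range(N_MOTORS):
        band = (freqs >= band_edges[i]) & (freqs < band_edges[i + 1])
        intensities[i] = spectrum[band].sum()
    # normalize so the strongest band drives its motor at full strength
    peak = intensities.max()
    return intensities / peak if peak > 0 else intensities
```

Each incoming chunk of sound becomes one buzz pattern around the torso, with low frequencies on some motors and high frequencies on others, just as different spots along the cochlea respond to different pitches.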
Our first participant was a guy named Jonathan. He was
thirty seven years old. He had a master's degree, and
(24:42):
he had been born profoundly deaf, which means there was
a part of his umwelt that was unavailable to him.
So we had Jonathan wear the vest and train with
it for four days, two hours a day, and by
the fifth day he was pretty good at identifying the
words that were being said to him. So you say
(25:03):
the word dog, and Jonathan feels a pattern of vibrations
all over the vest, and his job is simply to
write on the dry erase board what he thinks the
word might have been, and by day five, he could
get this mostly right. Now, we had trained him on
a limited number of words, what's called a closed set,
but when we switched to a new set of words,
(25:24):
ones he had never heard before, he was able to
perform well above chance. And he learned more and more
quickly with every new set. And this suggested he wasn't
just memorizing some answers. He was actually learning how to
hear with the vest. He was translating the complicated pattern
of vibrations into an understanding of what was being said. Now,
(25:49):
he wasn't doing this consciously, because the patterns are too
complicated for that, but his brain was unlocking the meaning
of this. And by the way, this is just like you.
So listening to this podcast, you're not thinking, oh, Eagleman
is saying some high frequencies and now some low and
some medium, so that must be an S sound. Instead,
(26:10):
You've just practiced hearing your whole life, and eventually you
become pretty good at using your ears and your brain.
But when you were born, you didn't know how to
use your ears, but your brain looked for correlations, things
that went together. So you would watch your mother's mouth
moving and you get spikes coming down your auditory nerve,
(26:31):
and you figure out that those go together. Or as
a baby, you clap your hands and you get a
different pattern of spikes coming down your auditory nerve. Or
you bang on the bars of your crib, or you
babble with your mouth, and these all correlate with particular
patterns coming in along this nerve, and eventually these patterns
become what philosophers call a quale, which is a private,
(26:55):
subjective experience of hearing. You don't have to think about
what all the spikes mean. They just get translated into
a direct perceptual experience. Okay, so back to the vest.
So we tested the vest with lots of participants in
the deaf community, and in fact, we even built a
miniature vest because it turned out that one of the
(27:17):
people we were working with had a daughter who was
born deaf and blind. So we made this little miniature
vest for her and it picked up on the sounds
of the world and translated this into patterns of vibration
on her skin. And so her grandmother took her around
the lab and touched her feet on things and said,
this is hard, this is soft, this is going up,
(27:40):
this is going down, and so on, and this allowed
the little girl to tap into a larger part of
her umwelt. Then we made a smaller version of the vest,
just a chest strap, and we began testing that with
some other children. But eventually we were able to shrink
the whole system down to a wristband, and that opens
(28:01):
up the technology for a much larger population, and we
spun this off of the lab as a company called
Neosensory, and one of our first users was a
wonderful guy here in the San Francisco Bay area named
Phil, and we videoed him talking in sign language about
what the wristband meant to him. So I'm going to
(28:23):
quote him here, as a translation from the sign language he used.
He signed, quote, it makes me feel a natural connection
with everyone around me. Sometimes I perceive, wow, I can
tell what a sound feels like, if someone calls my name,
or if there's some kind of noise nearby, or my
dog's barking, or even my wife calling me from far away: Philip!
(28:46):
I feel her call my name and I go to her.
So we tested lots of people who were deaf in
the Bay area, and people reported things to us like
I'm picking up on running water or birds or the
oven timer. And when wearing it at work, I had
a really good experience, like when people were talking in
the room, I could feel what they were saying and
(29:07):
it helped me lip read better. And as a quick
side note, we went to interview lots of people who
are deaf, and I came to understand that lots of
deaf people live in nice apartments in one particular location,
which is right next to the railroad track, because the
sound of the howling trains passing by doesn't register with
(29:28):
them and bother them, so they can live comfortably in
a steeply discounted apartment that's perfectly nice. But people who
are hearing don't want that apartment. Anyway, back to the story,
users started telling us that they were picking up on
things that they didn't even know existed, like that microwaves beeped,
or that their car blinker made a clicking sound, or
(29:50):
that if they accidentally left the air blower on at work,
that it was making a noise, or for that matter,
the loudness of toilets flushing, or that they had left
the sink running. And they started feeling things like the
laughter of their children on their skin. And they were
able to distinguish which child was talking and which of
(30:10):
their dogs was barking. And with time, people just get
better and better at picking up the sounds of the
world as patterns of vibration on their skin. And with
one of our users, I asked him, what is it
like when he hears the dog bark? Does he register, oh,
there were just vibrations on my wrist, and so now
I have to translate that, that must have been a
dog barking? And he said, no, I just hear the
(30:34):
dog barking out there, which sounds crazy, right, but remember
that's all that's going on with your ears. You hear
the sound out there even though it's actually happening in
here in your head. Now, after we were years into
(31:05):
this project, I began to discover that the idea of
converting sound to touch is not even new. I found
a paper from nineteen twenty three. There was a psychologist
at Northwestern University called Robert Gault, and he heard about
a deaf and blind ten year old girl who claimed
to be able to feel sound through her fingertips. So
(31:28):
he was skeptical, and so he ran an experiment. He
stopped up her ears and wrapped her head in a
woolen blanket, and he put her finger against the diaphragm
of a device which converted his voice signal into vibrations.
So Gault sat in a closet and spoke to her
through the device, and so her only chance to understand
(31:49):
what he was saying was from the vibrations on her fingertip.
And what he reported is that it worked. She was
able to tell what he was saying through her fingertips.
And in the early nineteen thirties, an educator at a
school in Massachusetts developed a technique for two deaf and
blind students. Being deaf, they needed a way to read
(32:12):
the lips of speakers, but they were blind as well,
so that couldn't work. So the technique consists of placing
a hand over the face and neck of the person
who is speaking, so the thumb rests lightly on the
lips and the fingers fan out to cover the neck
and cheek, and in this way they can feel the
(32:33):
lips moving and the vocal cords vibrating, and even the
air coming out of the nostrils. And by the way,
because these two original students were named Tad and Oma,
this technique is known as the Tadoma technique, and thousands
of deaf and blind children have been taught this method
and they can obtain proficiency understanding language almost to the
(32:56):
point of those with hearing. So the key thing to
note for our purposes is that all the information is
coming in through their sense of touch. And in the
nineteen seventies the deaf inventor Dimitri Kanevsky came up with
a two channel vibrotactile device, one of which captures the
(33:16):
low frequencies and the other the high frequencies, and these
two vibratory motors sit on the wrists. And in the
nineteen eighties some other people came up with things like
this too, which all demonstrated the power of sensory substitution.
The problem was that all these devices were too large,
and they typically just had one motor or two motors,
(33:38):
and they got too hot, and it was not practical
for people to wear these. It's only now that we're
able to capitalize on a whole constellation of tech advances
to run this in a wristband in real time. And
so I'm really happy to say that the Neosensory wristband
is now on wrists all over the world. And
(33:58):
what's cool is that this technology is a game changer
because the only other solution for deafness is a cochlear implant,
and that's something that requires about one hundred thousand dollars
and an invasive surgery. But the wristband we can build
for one hundred times cheaper, and that opens up the
technology globally, even for the poorest countries in the world.
(34:20):
And that's one of the reasons we've been able to
get this into underfunded schools for the deaf all over
the globe, and we've had many wonderful philanthropists help us
do that because this is such a different scale of
solution that's simple and inexpensive and takes advantage of a
very strange principle of the brain: sensory substitution. And we've
(34:44):
just released something else that's having real impact. It's a
version of the same idea, but it's not for people
who are deaf, but instead people who are having normal
age related hearing loss, which almost always happens in the
high frequencies, which is why people who are
getting older and losing hearing start having a harder time
(35:06):
understanding women and children because their voices tend to be
at a higher frequency. So we developed cutting edge machine
learning that sits on the wristband and listens in real
time just for the high frequency parts of speech. So
for example, it just listens for an S or a
Z or a B or a K, and the wristband
signals in different ways each time it hears one of
(35:29):
those speech sounds. And so the key is when you're
losing your high frequency hearing, your ears are still doing
fine at the medium and low frequencies. Those are getting
to the brain. The wristband is just clarifying what's
happening at the high frequencies, and your brain learns to
fuse these signals from your ear and from your skin,
(35:51):
so it puts together what it heard from the ear
with what it's getting through the wristband, and after a
few weeks people develop much clearer hearing. And as
an interesting side note, people don't always notice that they're
getting better, but everyone around them does, and if they
forget to put on the wristband, they get yelled at.
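As a toy sketch of the idea, not the actual machine learning on the wristband: watch short audio frames and flag the ones whose energy sits mostly above an assumed cutoff, then fire a distinct buzz for each flagged frame. The sample rate, cutoff, and threshold here are illustrative.

```python
import numpy as np

SAMPLE_RATE = 16000
HF_CUTOFF = 4000          # assumed "high frequency" boundary in Hz
HF_RATIO_THRESHOLD = 0.6  # fraction of energy above the cutoff to count as an event

def high_frequency_event(frame: np.ndarray) -> bool:
    """Return True when most spectral energy in this short audio frame sits
    above the cutoff, a crude stand-in for detecting sounds like S or Z."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    total = spectrum.sum()
    if total == 0:
        return False
    return spectrum[freqs >= HF_CUTOFF].sum() / total > HF_RATIO_THRESHOLD
```

In a real device, each detected event would trigger its own vibration pattern, while the medium and low frequencies continue to reach the brain through the ear as usual.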
(36:11):
So that's an example of pushing some information into the
brain via an unusual channel while most of the information
is coming in the normal way. And I'll also tell
you something else amazing that we found, which is that
the wristband works incredibly well for reducing tinnitus, which is
ringing in the ears. So a couple of research labs
(36:31):
had previously shown that tinnitus can be reduced by something
called bimodal stimulation, which just means that you have
sounds and you have touch that are synchronized. So that's
two modes, or bimodal. Now, the previous research had
done this by combining tones (beep beep) with shocks on the tongue,
(36:52):
and that worked to drive down the ringing in the ears.
So we did the same thing with the wristband and
it works the same. We've published our data on this
that people with tinnitus get clinically significant improvement. Now, why
does something like that work. There are some sophisticated arguments
and debates about why this works, but I think the
(37:15):
simple explanation is that we're just teaching the brain what
is a real external sound, because those get confirmation on
the wristband: when you hear boop boop boop, you're feeling it.
But the tinnitus, the internal sound, gets no verification on the wrist,
and so the brain figures out that's fake news, and
(37:37):
it drives it down. Now, we're doing all kinds of
other experiments using the wristband for sensory substitution. So, for example,
we've begun to study this as a device for balance.
So there are many people who have problems with balance
because of their inner ear. They don't realize when their
body is tilting. So in our experiments, they wear the
(37:59):
wristband and they also wear a small collar clip,
and the collar clip has a motion detector and a
gyroscope in it, and it can detect your orientation whether
you're standing straight or you're tilting one way or another,
and it just sends that information to the wristband, so
you become aware if you're tilting and you know in
which direction. So it goes bzz when you tilt one way and
(38:21):
bzz when you turn the other way. And this is
simply taking what your inner ear would normally do, and
if there's something wrong with it, it's just sending it
in through a different channel.
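A rough Python sketch of that loop, with made-up names standing in for the collar clip's accelerometer and the wristband's buzz call; the tilt threshold and axis conventions are assumptions.

```python
import math

TILT_THRESHOLD_DEG = 5.0  # assumed lean angle beyond which we warn the wearer

def tilt_from_accel(ax: float, ay: float, az: float):
    """Estimate lean angle and direction from a 3-axis accelerometer on the
    collar clip (x = right, y = forward, z = up when standing straight)."""
    angle = math.degrees(math.atan2(math.hypot(ax, ay), az))
    direction = math.degrees(math.atan2(ax, ay))  # 0 = forward, 90 = right
    return angle, direction

def update_wristband(ax, ay, az, buzz):
    """buzz(direction_deg, strength) is a stand-in for the wristband call."""
    angle, direction = tilt_from_accel(ax, ay, az)
    if angle > TILT_THRESHOLD_DEG:
        # vibrate harder the farther the wearer leans
        strength = min(1.0, (angle - TILT_THRESHOLD_DEG) / 30.0)
        buzz(direction, strength)
```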
And beyond deafness and balance, we're doing other things like working with prosthetics. So when
somebody gets an amputation, they get an artificial leg prosthetic.
(38:42):
And what we did is we put sensors on the
leg so that you can feel the information on the wristband.
So we're taking an artificial limb and by putting angle
and pressure sensors on it, we are restoring the sensory
input that you would have from it through the wristband, and
that allows patients to learn much more quickly how to
(39:04):
walk with their new prosthetic limb. Now, beyond sensory substitution,
how can we use a technology like this to add
a completely new kind of sense to actually expand the
human umwelt? For example, could we feed real time data
from the internet directly into somebody and could they develop
(39:27):
a direct perceptual experience. So some years ago we did
an experiment in the lab where a participant feels a
real time streaming feed from the net of data for
five seconds, and then he's holding a tablet and two
buttons appear and he has to make a choice. He
doesn't know what's going on, but he makes his choice
(39:49):
and then gets feedback after a second and a half. Now,
here's the thing. The subject has no idea what all
these patterns mean, but we're seeing if he can get
better at figuring out which button to press. And he
doesn't know that what we're feeding is real time data
from the stock market, and he's making buy and sell decisions,
(40:10):
and the feedback is telling him whether he did the
right thing or not. And what we're seeing is can
we expand the human umwelt so that he comes to
have a direct perceptual experience of the economic movements of
the planet. Here's another experiment, which I showed at TED
some years ago in a talk. We scrape the web
(40:31):
for any hashtag and we do an automated sentiment analysis,
which means are people using positive words or negative words
or neutral and we feed that into the vest or
the wristband. And this allows a person to feel what's
going on in the community of millions of people and
to be plugged into the aggregate emotion of giant crowds
(40:54):
all at the same time. And that's a new kind
of human experience, because you can't normally know
how a population is feeling. It's a bigger experience than
a human can normally have. And we're working on feeling
signals that exist out there but are normally invisible to you.
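A stripped-down Python sketch of that hashtag-to-skin pipeline. Real sentiment analysis would use a trained model and a live feed of posts; the tiny word lists, the scaling, and the function names here are stand-ins for illustration.

```python
POSITIVE = {"love", "great", "happy", "amazing", "win", "beautiful"}
NEGATIVE = {"hate", "terrible", "sad", "awful", "lose", "angry"}

def sentiment_score(post: str) -> int:
    """Crude word-count sentiment: +1 per positive word, -1 per negative."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def crowd_to_vibration(posts: list) -> float:
    """Collapse a batch of posts into one vibration intensity in [0, 1],
    where 0.5 means the crowd is roughly neutral."""
    if not posts:
        return 0.5
    avg = sum(sentiment_score(p) for p in posts) / len(posts)
    return max(0.0, min(1.0, 0.5 + avg / 4.0))  # squash into the motor's range
```

The wearer never reads the posts; they just feel a stronger or weaker buzz as the aggregate mood of the crowd shifts.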
So imagine that instead of a police officer having to
(41:15):
have a drug dog, they could instead feel the odors
around them that they normally couldn't. So imagine building an
array of molecular detectors and instead of needing the dog
with its huge snout, they can just directly experience that
level of smell themselves through vibrations on the skin. And
(41:38):
we're doing things with robotic surgery. So normally, when a
surgeon is doing a robotic surgery, they have to keep
looking up at the monitors to understand what's going on
with the patient. But imagine being able to simply feel
the data from the patient, the heart rate and the
breathing and so on, simply feeling it as you're going
(41:58):
and not needing to keep looking at the monitors. Another
thing we've been working on for a while is expanding
the umwelt of drone pilots. So in this case, we
have the vest streaming nine different measures from a quad copter,
so the pitch and yaw and roll and orientation and heading,
and that improves the pilot's ability to fly it because
(42:21):
it's essentially like the drone pilot is extending his skin
up there onto the drone far away. He's becoming one
with the drone. He can learn how to fly it better
in the fog or in the darkness, because essentially he
is becoming one with the drone.
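Here is a minimal Python sketch of how nine flight measures could each be pinned to their own motor on the vest, so a given spot on the torso always means the same flight variable. The episode names pitch, yaw, roll, orientation, and heading; the remaining channels, the value ranges, and the function name are my assumptions.

```python
# Nine telemetry channels streamed from the quadcopter
CHANNELS = ["pitch", "yaw", "roll", "orientation", "heading",
            "altitude", "vx", "vy", "vz"]

# Assumed value ranges for normalizing each channel into [0, 1]
RANGES = {name: (-180.0, 180.0) for name in CHANNELS}
RANGES["altitude"] = (0.0, 120.0)
RANGES["vx"] = RANGES["vy"] = RANGES["vz"] = (-15.0, 15.0)

def telemetry_to_motors(sample: dict) -> list:
    """Map one telemetry sample to nine motor intensities, one per channel,
    clipping each value into its expected range first."""
    intensities = []
    for name in CHANNELS:
        lo, hi = RANGES[name]
        value = min(max(sample.get(name, lo), lo), hi)
        intensities.append((value - lo) / (hi - lo))
    return intensities
```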
to this is imagine taking a modern airplane cockpit which
(42:44):
is full of gauges, and instead of trying to read
the whole thing, you just feel it. Because we live
in a world of information now and there's a difference
between accessing big data and experiencing it. And we're also
exploring how to expand your body to a different location. So
(43:04):
imagine that you feel everything that a robot feels. So
you send an avatar robot on a rescue mission into
a place that's very dangerous, like after an earthquake, with
collapsed buildings and dangerous chemicals, and you feel what the
avatar robot is feeling. So you can close the feedback
loop between action and perception. And we're interested in using
(43:28):
this for the military to reduce friendly fire, which is
when a person gets killed just because one of their
colleagues makes a mistake and shoots them. So with our
chest strap and some encrypted position information, you can tell
where your friendlies are at any moment, because you're feeling them.
You know their location right on your body, like Fred
(43:50):
is off to my left because I can feel a
slight vibration, but now he's getting closer to me, so
the vibration gets more intense. And now I know that
Steve is behind the wall over there because I can
feel him moving around even though I can't see him,
and Tom is behind me back there. You don't have
to rely on vision because you're feeling where everyone is.
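Here is a small Python sketch of that mapping, assuming teammate positions arrive already decrypted, a ring of motors around the torso, and a simple closer-means-stronger rule; the motor count and maximum range are illustrative.

```python
import math

N_MOTORS = 32           # assumed ring of motors around the torso
MAX_RANGE_M = 200.0     # beyond this, a teammate is too far away to feel

def teammate_to_buzz(self_x, self_y, self_heading_deg, mate_x, mate_y):
    """Return (motor_index, intensity): which motor around the torso to fire
    for a teammate's position, and how hard, based on bearing and distance."""
    dx, dy = mate_x - self_x, mate_y - self_y
    distance = math.hypot(dx, dy)
    # bearing of the teammate relative to where I'm facing, 0-360 degrees
    bearing = (math.degrees(math.atan2(dx, dy)) - self_heading_deg) % 360
    motor = int(bearing / 360 * N_MOTORS) % N_MOTORS
    # closer teammate -> stronger vibration, as described in the episode
    intensity = max(0.0, 1.0 - distance / MAX_RANGE_M)
    return motor, intensity
```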
(44:12):
So with one of our engineers, Mike Perrotta, we built
a version of this and we demonstrated it by turning
to fiction. We had our vest make a cameo on
the show Westworld. So if you saw season two, episode seven,
the storyline is that private military contractors drop into Westworld
(44:34):
to take care of these out of control robots called
the Hosts. And we set this up so that the
military contractors in the show are wearing our vests that
let them feel the location of the Hosts on their bodies,
and that's how they know exactly how to target them.
So as they're moving around, they can feel, oh, there's
(44:55):
a robot over there, and there's a robot on the
other side of that thing, and there's a robot in the
dark over there, and they can aim at them appropriately. Now,
as it turns out, all the military contractors eventually get killed,
so the vest is not necessarily going to save your
life if things really hit the fan with robot consciousness,
but that's a different episode, and we've used this same
(45:16):
concept for people who are blind. We set this up
in collaboration with some colleagues at Google who have lidar
in their offices. Lidar is like sonar, but
with light, and so with lidar you can know
the location of everything and everybody moving around in the offices,
and we tapped into that data stream and we brought
(45:37):
in blind participants and they could feel where everyone was.
So if there's someone on your right, you feel a
vibration on your right, and as they get closer, it
gets more intense, and as they go away it gets
less intense, and you can feel them moving around you
and you can even feel when they're walking around behind you,
which is better than sighted vision. And on top of that,
(45:59):
we also added navigation. So our participants had never been
to these offices before, but we type into the system
a particular conference room to go to, and the person
then feels on their vest a buzzing on the front,
so they walk straight and then they feel a buzz
on their left, and they turn left and then they
feel a diagonal buzz and they know that the conference
(46:21):
room is diagonally over there, and they were able to
navigate this way on top of feeling who is around them,
and so in this way, they're not getting real vision,
but they're getting a lot of incredibly important information in
a very simple way. And there's really no end to
the possibilities on the horizon with sensory substitution and sensory expansion.
(46:46):
One experiment we did involves using these smart watches that
can measure things like your heart rate and heart rate variability
and galvanic skin response, and so we tapped into the
API for that and we put the data on the internet,
and then you feel that on the wristband, so you
can feel these normally invisible states of your body. But
(47:07):
the interesting part is when you take the watch off
and give it to someone else, let's say your spouse,
so that now you are feeling the physiologic responses of
another person. You're tapped into their internal signals. Now, I
have no idea if this is good or bad for marriages,
but this is an experiment we're trying because humans are
(47:29):
at a point now where we can open up new
folds in the possibility space. There are things we can
experiment with to have new kinds of senses and bodies,
and we can feel things like not only other people's physiology,
but things like entire factories or traffic patterns. In general,
what this gives us is a new approach to data.
(47:53):
Our visual systems are fundamentally really good at blobs and
edges and motion, but they're limited in what they can
attend to. They can only do one thing at a time,
and that's not very good for high dimensional data. But
your body is very good at multidimensional data, which is
why you can balance on one leg and you're getting
feedback from all these different muscle groups. You're taking in
(48:15):
high dimensional data and dealing with it all at once
and with the right sorts of data compression, I think
there's no limit to the kind of data that we
can take in. We have about seventy different experiments running
on this, and if you're interested, go to neosensory dot
com slash developers and you can see all the various
(48:36):
cool projects that we and the community in general have done.
So the possibilities are endless here. Just imagine an astronaut
being able to feel the overall health of the International
Space Station, or for that matter, having you feel the
invisible states of your own health, like your blood sugar
(48:57):
and the state of your microbiome, or having three
hundred and sixty degree vision, or seeing in infrared or ultraviolet.
So the key is this, as we move into the future,
we're going to be increasingly able to choose our own
peripheral devices. We don't have to wait for Mother Nature's
(49:17):
sensory gifts on her time scales, because instead, like any
good parent, what she's given us are the tools that
we need to go out and define our own trajectory.
So the question now is how do you want to
experience your universe? That's all for this week. To find
(49:43):
out more and to share your thoughts, head over to
eagleman dot com slash podcasts. Any questions or discussions that
you have please email podcasts at eagleman dot com and
I will be addressing those on future episodes. Until next time,
I'm David Eagleman, signing off to you from the
Inner Cosmos.