Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Welcome everyone to Forward Thinking, the podcast where
we talk about the future. I'm Jonathan Strickland, I'm Lauren
Vogelbaum, and I'm Joe McCormick. Joe, the Possessor of
(00:22):
the Axe of Destiny. Um, we now have an
official Forward Thinking plastic toy axe that was discovered in
the HowStuffWorks office toy box. Yeah, so now
it's ours. I don't know what we'll do with it
or why it's forward thinking, but we may do an
episode on the future of axes and axe technology.
(00:43):
That's good. That's good. Of plastics. Yeah, plastic axes
in particular face quite a different future. That's true. Well,
that's not what we're talking about today though. Today we're
talking about brain computer interfaces. Uh, what? Now you're looking
at me like... what, you want to talk about axes? No,
absolutely not. I have no idea why we talked about that. Okay.
(01:03):
Brain computer interfaces is what we're going to concentrate on today.
And uh, and really it's exactly what it sounds like.
It's an interface that allows you to interact with a
computer using just your brain power. That sounds like science fiction.
Well, at one point it was just science fiction, but
now it's science fact, and in fact we've made
a lot of progress in this field. Essentially, the
(01:27):
way this works, you have to understand the way your
brain works. You know, you have neurons in your brain,
and the neurons communicate. They pass along messages and commands
to the various systems in your body using a combination
of electric signals and chemical processes. Right, and all these
neurons in your brain are connected by things called axons and dendrites,
(01:49):
and they and they create complex patterns of electrical charges
that surge back and forth. And this is what manifests
as thought, right, and also as a command. So like
when you lift your arm, you know, or
as Buck Rubonds would say, throw a harpoon. Um, so yeah,
I mean the neurons, because they are communicating at least
(02:12):
partially through electricity, not solely. It means that if
we could find a
way to detect electrical activity and to assign particular types
of electrical activity to particular tasks, we could interpret
that electrical activity and translate it into something that a
computer could understand, like a command. Right. Yeah. Well, the
(02:34):
interesting point is this goes all the way back. The
first time that someone noticed that the brain makes
this electrical current was a British physician named Richard Caton, I believe.
And yeah, so this is not a new idea. No, no, no, but it's certainly
a fantastic one, right. Yeah, the whole idea of being
able to harness computer power is relatively new. But the
(02:55):
fact that there's electrical activity
in the brain, we've known about that for a while, right.
And so essentially, what your brain is doing is
it's producing meaningful action based on electrical impulses, which is
actually the same thing a computer does. Right, yeah, yeah,
electrical impulses manifest as meaningful action. Yeah. In that case,
(03:15):
the electrical impulses tend to be on very tiny microprocessors,
going through extremely tiny channels on these microprocessors. We're
talking about on the nanoscale. And so if
we were able to to detect and interpret these these
electrical patterns that go on in our brains in a
meaningful way, we could then create an interface with a
(03:38):
computer and pass commands through thought. Yeah,
pretty crazy, but it's awesome. But how do we read that?
I mean, how do you figure out what's
going on inside someone's skull? Well, it's actually simpler than
you might think. It's difficult and it's simpler
at the same time. So your brain is
producing electrical impulses, and in a way, it's not all
(04:02):
that different than the data you'd create by punching keys
on a keyboard or moving around the mouse. When your
brain thinks a thought, something physical happens. If you have
a way of detecting what that physical thing is that happens,
then you have a way of creating input. All you
have to have is some kind of sensor or machine
(04:25):
or way of gathering the input from the brain. That's
the part that's hard. Well, one part that's hard
is actually teaching the computer, or the
person, or the robotic limb. You have to teach it
how to interpret a thought into an action,
(04:50):
and part of the challenge of that
is that two people could go into the same lab
to have their thoughts analyzed. Essentially, you can get
an MRI machine, for example, and look at
the brain activity, and you tell the person inside
the machine, all right, I need you to think about
lifting your left arm. And then you watch the parts
(05:10):
of the brain that light up, and you say, all right,
those are the parts of the brain that I want
to concentrate on, because I'm designing the system so that
the person will be able to control a robotic arm
that would be in the place of their left arm.
So you think. But then you get a second person
in there and they think about it, and it's gonna
be slightly different places, so it's not like one size
fits all. Well, let's make this concrete. Let's talk about
what we're actually talking about doing in terms of the technology. Okay,
so what are some of the ways that people think
they can actually detect electrical impulses in the brain and
do something useful with them? Well, the least
invasive and the least expensive, and probably the
easiest would be the EEG. Right. Well, first of all,
you already mentioned MRI, and MRI
can do this, but it can't do it on the
go. Right. No, MRI machines are very large, and they're giant magnets,
and MRI is really better used to
determine the best places to put an electrode to
detect these electrical impulses. So you kind of use it
(06:12):
in the testing phase, not in the execution phase. Right,
the MRI seems very useful for
figuring out what you need to do, but not for
real-time control. But so yeah, you mentioned the
EEG, the electroencephalogram, and this
goes back a ways. A German neurologist named Hans
Berger found a way to read the current by electrode placement,
(06:36):
so they put electrodes on the scalp itself, which detect
any electrical activity. Wait a minute, though. You'd have to
think that if you're just putting electrodes all over your head,
that's got to be a really high noise-to-signal ratio.
Yeah, because your skull actually does block some of
the electrical signals and also distorts some of them, which
means that, if you want to talk in terms of resolution,
(06:59):
you would have low-resolution signals, meaning that
you'd be able to detect activity, but
your accuracy would be fairly low. So it'd be
like taking directions from somebody who's on a cell phone
inside a bomb shelter underwater. Yeah, yes, it's exactly like that.
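The noise problem being described here can be sketched with a toy example. Nothing below is real EEG processing; the 10 Hz "alpha"-like rhythm, the noise level, and the crude moving-average filter are all invented for illustration, but it shows why scalp recordings need aggressive filtering before they are usable:

```python
import math
import random

def noisy_eeg(n=1000, fs=250.0, freq=10.0, noise=2.0, seed=42):
    """Synthesize a 10 Hz rhythm buried in noise. The skull attenuates
    and smears the real signal, so the noise here is deliberately much
    larger than the rhythm itself."""
    rng = random.Random(seed)
    clean = [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]
    measured = [c + rng.gauss(0, noise) for c in clean]
    return clean, measured

def moving_average(x, width=9):
    """A crude low-pass filter: averaging ~36 ms of samples at 250 Hz
    suppresses broadband noise while mostly keeping a 10 Hz wave."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def correlation(a, b):
    """Pearson correlation, a stand-in for 'how recognizable is the
    true rhythm in this recording'."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

clean, measured = noisy_eeg()
filtered = moving_average(measured)
r_raw = correlation(clean, measured)
r_filt = correlation(clean, filtered)
print(f"correlation with true rhythm: raw {r_raw:.2f}, filtered {r_filt:.2f}")
```

A real system would use proper band-pass filters and combine many electrodes, but the principle is the same: trade detail for a cleaner estimate of the underlying rhythm.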
Or as I was going to say, maybe like if
(07:20):
you were to put the electrodes on your scalp, and
let's say that the hardware and software that
you're working with allows you to move a cursor
on a screen by concentrating on it. If you're wearing the
electrodes on your scalp, then it may not be that accurate,
and it may require you to spend a lot
more time concentrating and trying to move that cursor than
it would if you were to have a more invasive procedure. Right.
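The cursor example can be sketched as a toy simulation, with a single "signal strength" number standing in for recording quality. The numbers and the decoding model are invented for illustration: a clean signal steers the cursor to the target quickly, while a noisy scalp-grade signal takes far longer.

```python
import random

def move_cursor(signal_strength, steps=1000, seed=1):
    """Steer a cursor toward a target using a noisy estimate of the
    user's intent. signal_strength stands in for recording quality:
    scalp EEG ~ low, implanted electrode ~ high. Returns the number
    of steps needed to park the cursor on the target, or None."""
    rng = random.Random(seed)
    pos, target = 0.0, 50.0
    for step in range(1, steps + 1):
        intent = 1.0 if pos < target else -1.0      # user: move right / move left
        reading = signal_strength * intent + rng.gauss(0, 0.5)  # noisy decode
        pos += reading
        if abs(pos - target) < 1.0:                 # close enough: done
            return step
    return None

scalp = move_cursor(signal_strength=0.3)    # low-resolution scalp recording
implant = move_cursor(signal_strength=2.0)  # cleaner implanted electrode
print("steps needed, scalp EEG:", scalp)
print("steps needed, implant:", implant)
```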
So here we come to the other, and perhaps most
promising, technology: the surgically implanted electrode. Right.
So in this case, you have gone
and actually removed part of the skull and gone
(08:03):
into the brain. Or you may just put the electrode
on the surface of the brain, but there are also
procedures where, in order to get to specific parts of
the brain, you actually implant it directly into the brain.
This also has some problems, obviously, one being invasive surgery.
There's always risk there, and, you know, that's
(08:24):
a big decision. The second problem, or potential problem,
is that over time, scar tissue can
form over the electrode, which will create that sort of
insulating problem that the EEG has, where you're going
to get signals blocked or distorted because of the scar tissue.
And then, until recently anyway, it also meant that
(08:47):
you had to be tethered... yeah,
you would have essentially a connection point that would extend
out from your skull, and you would have to connect
a wire to that, and that wire would be connected
on the other end to whatever device it was that
you were controlling, whether it was a computer or a
robotic arm or whatever. Though this problem has recently been
(09:09):
somewhat conquered. We can talk about that in a minute.
So okay, so we're talking about neural implants: an electrode,
a little piece of metal, a chip that goes
in your brain, and it takes those electrical impulses and
sends them to a computer. But we have to
think that it can't be that easy, right? It
(09:30):
can't just send the electrical impulse to the computer. There
has to be really advanced software, right, that understands
how to deal with these impulses, or that, based on an
extremely complicated set of probabilities, can figure out that
that little blippy block probably means this thing as
opposed to that little blippity blob. Which, generally speaking, the
way that I understand this works is that it
requires a long session, or multiple sessions, with scientists and
doctors going through the process of saying, all right,
let's give an
example. Say it's a robotic arm, and they say, well,
(10:14):
we want you to lift your left arm.
And let's say it's a person who is fully capable
of doing this. And so the person lifts their left
arm, and they interpret those signals and they say, all right,
these are the signals we want. These are the signals
that say lift the left arm for the robotic arm.
And you repeat that test many, many, many, many, many
times to get as accurate a picture as possible of
(10:37):
the neural activity that's going on when you lift your
left arm. And then you eventually say, this represents the
input for the robotic arm to raise up. So then
the next time you lift your left arm and you're
hooked up to this system, the robot arm also lifts
its arm. And you have to do the
same thing for all the different commands, so things like
(10:59):
gripping or twisting your wrist, unless you've actually designed the
robotic arm to have some of these features automatically. I
saw one video where the robotic arm would automatically grip
something if the palm came into contact with
an object, so that you didn't have to go that
extra step to think, make the fingers close at a
certain point. Yeah, the sensor is easier to produce than
(11:21):
the brain transfer. That makes it hard to do, like,
an open-hand slap, though. Yeah, you'd just grip the
person you're challenging instead, pinching their cheek. Yeah, yeah,
future problems, duels of the future. I
would say that if my cheek were pinched by a robotic arm,
I would consider myself chagrined. Yeah, but anyway, we
(11:42):
would defend your honor, and you'd have to... I would
demand satisfaction. Yes. Okay, I'd have to go and grab
our mystical axe. Um, so, bringing it back,
getting back to this. Obviously,
one of the big potential uses for this
technology is to help people who are severely disabled, who
(12:04):
are perhaps paralyzed or quadriplegic and cannot do a
lot of things for themselves. And so for them it's
a learning experience, because they may
lack the capability of doing whatever physical action they need
to do to send the command. So it ends up
being more of a learning process both for the person
(12:26):
and for the machine. Yeah, and luckily, our brains are
incredibly plastic, and are very good at
continuing into adulthood developing new neural paths in order to
figure out these kinds of processes. Right, plastic as
in changeable, not actually... Right, correct, they are not
made of vinyl. So yeah. So you talked about
(12:47):
helping people who have severe physical disabilities, and I've
actually read about a few of these cases, and they're
kind of amazing stories. At the end of
two thousand twelve, there was a woman who is quadriplegic,
and from what I've read, it looks like
neurobiologists at the University of Pittsburgh Medical Center
(13:10):
enabled her, through a neural implant, so an
electrode in her brain, to control a robotic arm with which
she could feed herself. And all it took her was
a couple of days of practice before she could move
this robotic arm with her brain to feed herself a
piece of chocolate. So yeah, the good thing
(13:30):
is that people are very good at adapting
to this, and that we are getting better and better
at building the hardware and software that can act as
the other half of this system. Right. And I saw
an interesting exoskeleton thing, which I thought was pretty cool.
It actually used an EEG cap, so, you know,
(13:51):
it was noninvasive. This
was for people who could not walk for themselves.
They would get into this exoskeleton, and the cap would
go on top of their heads. It was called a
MindWalker, and then they would walk by concentrating. I
don't... I don't know, I hope it's not from one
(14:13):
of those lost years. It's not Sleepwalkers, I'll tell you.
That's a terrible, terrible movie. Don't tell my dad; it's like
his favorite movie. Wow. Okay, I have no response to that.
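The calibration loop described a few minutes ago, recording many repetitions of a command like "lift your left arm" and then matching new activity against the stored patterns, can be sketched as a nearest-centroid classifier. The "activity patterns" below are made-up eight-channel vectors, not real neural data:

```python
import random

random.seed(0)

# Hypothetical "true" activity patterns for two commands, as readings
# across eight pretend electrode channels. Entirely invented.
TRUE = {
    "lift_arm": [1.0, 0.2, -0.5, 0.9, 0.0, 1.2, -0.3, 0.4],
    "grip":     [-0.6, 1.1, 0.7, -0.2, 0.9, -0.8, 0.5, 0.1],
}

def trial(pattern, noise=0.8):
    """One noisy recording of a command's underlying activity pattern."""
    return [p + random.gauss(0, noise) for p in pattern]

# Calibration: repeat each command many times, average into a template.
templates = {}
for cmd, pattern in TRUE.items():
    trials = [trial(pattern) for _ in range(50)]
    templates[cmd] = [sum(channel) / len(trials) for channel in zip(*trials)]

def decode(activity):
    """Match new activity to the nearest stored template."""
    def dist(template):
        return sum((a - t) ** 2 for a, t in zip(activity, template))
    return min(templates, key=lambda cmd: dist(templates[cmd]))

# A fresh, unlabeled recording of the user thinking "grip":
predicted = decode(trial(TRUE["grip"]))
print("decoded command:", predicted)
```

Real decoders are far more sophisticated (Kalman filters and machine-learning models are common in the literature), and, as discussed above, the templates have to be learned per person, but this is the basic shape of the calibrate-then-decode procedure.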
But at any rate, there are other
applications as well. But the main focus has really
been on giving people who otherwise would have difficulty interacting
(14:36):
with their environments more ability to do so through these interfaces.
And you were alluding earlier, Joe, to the idea of
the wireless approach, because one of the big drawbacks to
this was the fact that you would be tethered to
a machine, and so you had even more limited mobility
because of that. But yeah, it looks like
(14:59):
there were researchers at Brown University who came
up with a wireless neural implant. It's doing the
same job as these other ones we've talked about, where,
you know, it's reading the electrical impulses in
the brain and it's sending a signal, except this one
(15:19):
is sending it wirelessly. It's got a wireless transmitter and
rechargeable lithium-ion batteries. It's its own little unit, and it
transmits wirelessly to the computer that controls
the device. Right, right, whether it's a
computer or whether it's a robotic arm. And yeah,
I think, according to what I was reading,
(15:42):
it sounds like right now it has a very limited operating
range, about a meter away. But that's, I mean, that's
still way better than having a cord. Yeah, definitely.
And right now, no humans have had this
implant implanted in their heads. It's
only been used on test animals, monkeys and pigs,
(16:05):
so far, so no humans have had this procedure done.
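As a purely hypothetical sketch of the kind of telemetry such a wireless unit sends, here is a made-up packet format: a sequence number so the receiver can spot dropped packets, plus a block of digitized samples. The real device's wire format is not public; every detail here is invented for illustration.

```python
import struct

# Invented packet layout: 4-byte sequence number, 2-byte sample count,
# then that many 16-bit signed samples (scaled electrode readings).
HEADER = struct.Struct(">IH")

def pack(seq, samples):
    """Build one telemetry packet from a block of samples."""
    return HEADER.pack(seq, len(samples)) + struct.pack(f">{len(samples)}h", *samples)

def unpack(packet):
    """Decode a packet back into (sequence number, samples)."""
    seq, n = HEADER.unpack_from(packet)
    samples = struct.unpack_from(f">{n}h", packet, HEADER.size)
    return seq, list(samples)

# Round-trip one packet, as the receiving computer would decode it.
pkt = pack(7, [120, -45, 310, 0])
seq, samples = unpack(pkt)
print("packet bytes:", len(pkt), "seq:", seq, "samples:", samples)
```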
And I don't know when
that might change, but it's an interesting advance
in the field. So, speaking of monkeys, I mean,
there have been some really amazing studies done with monkeys
and these neural implants. And when I was reading
(16:27):
about it, it showed that, with some software tweaks,
engineers had gotten it to where monkeys could, and
keep in mind, these are monkeys, you know, they're not
people who can understand complex instructions given by researchers and
really respond to training like this. The monkeys figured
out how to move a computer cursor into a target
zone with, I can't remember exactly the percentage now,
a significant percentage of what they could do with
their arms, you know, just moving it by hand. Well,
there's also Idoya. Did you hear about Idoya? Oh yeah,
the runner. Yeah, yeah. So we're talking about a twelve-pound
monkey that, through thought alone, caused a two
(17:12):
hundred pound, five foot tall humanoid robot to walk on
a treadmill. That's awesome. And the monkey was in North Carolina.
The robot was in Japan. Wow. So cyborg monkey
armies storming the walls of our cities. Excellent. Remotely controlled
by their true monkey overlords in
(17:34):
their, you know, faraway power chambers. In the future,
there will be monkeys. Yeah. Well, I mean, so we've
been talking about how, in the near future, neural
implants and other brain computer interfaces could be very
helpful to people who suffer from, say, quadriplegia, or any
(17:57):
condition that's physically debilitating. You could control a robotic
arm, or you could, you know, mess with the computer
without being able to move your arms to move a mouse.
Right, you could communicate in ways that you might not
have been able to before. What happens when these things
get good enough that, as we've talked about before, you
might want them even if you don't need them? Yeah,
(18:17):
it'll be interesting to see. I mean, I would imagine
that unless the state of the art is so
amazing for the invasive forms, I can't imagine those ever
really taking off, at least in the foreseeable future. Depends
on what they can do. We could
talk a lot about it. I mean, you know, you
hear all the racket about, what is it, Homo cyberneticus,
(18:40):
and yeah. I'm thinking, oh, you're talking about like
a species. Yeah, Homo cyberneticus. Yeah, that kind
of thing. So I think that we're at
least fifty years away from that, personally. I mean,
I think that's pretty safe. It's a good number. It's
a good number at least. But, I mean, honestly,
(19:05):
to me, it's hard to see it now. I say that,
but then as a kid, I never would have imagined
that I could carry something that would have access to
pretty much the sum total of all human knowledge in
my pocket. I didn't think that was gonna happen either,
and it totally did. So I could be wrong about this.
I think it's more likely that we will see, and
we already have seen, things that are essentially
using the EEG method to act as some
sort of control device, or, you know, a toy.
I've seen toys that you wear: a
little helmet, and through concentration, you're
supposed to be able to control the path
of an object on a board. There's one I
(19:48):
saw at CES a few years ago, and the way
it worked was it had a little
air vent that blew air straight up, and you put
a ping-pong ball in it and it would suspend in the
airflow, right. And by concentrating, you could either increase
the airflow and make the ping-pong ball fly higher, or
decrease it and make it fly lower, and a rotating obstacle
(20:10):
course would go around, and you would have to try
and maneuver the ping-pong ball so it could fit
through hoops or pathways or whatever. So you're introducing
the possibility of telepathic gaming. Telekinetic gaming. That's the idea.
I don't know how well these things really work. Well,
right now, right now? Yeah, yeah, but I'm saying that
(20:32):
we already are seeing this sort of stuff. Yeah, and
they're still doing research into more, you know, noninvasive
ways to figure this kind of thing out, and speech
recognition technology. NASA, way back in two thousand four, actually was
doing a program where they were attaching sensors under
the chin and to the throat, because they found out that
when you just think about talking, your muscles still
(20:52):
have some kind of electrical activity that can be picked up.
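A first step in that kind of subvocal system, just deciding when muscle activity is present at all, can be sketched with a sliding energy threshold. The signal below is synthetic noise with a louder "burst" in the middle; the sampling rate, amplitudes, and threshold are all arbitrary values invented for the example:

```python
import math
import random

random.seed(3)
FS = 200  # samples per second; arbitrary for this sketch

# Synthetic recording: quiet baseline, a half-second "muscle burst"
# while the wearer silently mouths a word, then quiet again.
signal  = [random.gauss(0, 0.1) for _ in range(FS)]        # 1 s quiet
signal += [random.gauss(0, 1.0) for _ in range(FS // 2)]   # 0.5 s burst
signal += [random.gauss(0, 0.1) for _ in range(FS)]        # 1 s quiet

def rms(window):
    """Root-mean-square energy of a chunk of samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def active_windows(sig, win=20, threshold=0.4):
    """Start indices of windows whose energy exceeds the threshold."""
    return [i for i in range(0, len(sig) - win + 1, win)
            if rms(sig[i:i + win]) > threshold]

bursts = active_windows(signal)
print("activity detected at sample offsets:", bursts)
```

Classifying which word a burst represents is the hard part and would need the same kind of per-user calibration discussed earlier; this only answers "is anything being said?"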
And so, you know, stuff like that I think holds
some amount of promise. So, what you're saying is that
by thinking a word, sensors on the muscles in your
throat could follow commands even without your speaking? Yeah. Well,
and on top of that, there are other alternatives that
might take some of the load off of
the brain computer interface, I guess, because
you could look at things like eye-tracking software,
and a lot of eye-tracking software out there is
getting pretty sophisticated, to the point where, if you
are capable of moving your eyes (not everyone is; there are
some people with, you know, locked-in syndrome, who are
incapable of really making any sort of motion), but
(21:39):
if you're able to move your eyes, then you're able
to control elements on a computer screen. And I've seen
some pretty cool implementations of that. It's
really just using cameras to track your eye movements and
plot where you are looking on a screen, and
then interpreting the gaze as a command. So there are
other interfaces out there that we're seeing big advances in
(22:02):
that could either complement or, in some cases, depending upon
the use-case scenario, replace brain computer interfaces. It all depends
on what it's being used for, and, you know, what
you want the outcome to be. Obviously, it wouldn't work
for every implementation. So can you imagine any situation in
which a brain computer interface would be really useful to a
person who has otherwise full control of their body? Spies.
Spies, immediately, spies. Are you kidding me? A spy. Because
if you are able to interact with a computer
system without any overt sign that you are doing so,
I would say that it would be very useful for
a spy. So if you want to, like, start
your tape recorder without touching it, or say you're
(22:48):
sending a message saying, all right, I'm
about to be eradicated, so you might want to burn
my apartment, that kind of thing. I mean, I've watched
a lot of Bourne Identity. You know, even in our
own industry, if our producer Noel
could send us a text message just by thinking about it,
without having to make the noise of typing on keys
or, you know, waving his arms wildly or however else
(23:10):
he gets messages across to us, it would be nifty.
Maybe a filthy message. Noel's just smirking at me right
now and saluting me. You're not getting them. I choose
to ignore them. Well, anyway, that kind of wraps
up our discussion about brain computer interfaces. It's a
(23:31):
really interesting field, and I'm curious to see how it
develops over time. And guys, if you have any
suggestions for topics we should cover on Forward Thinking, please
get in touch with us. Our email address is FW
Thinking at discovery dot com. Please go to the FW
Thinking dot com website, because that's where we have all
our videos, our blogs, the podcasts, all our social media stuff.
(23:53):
It's all there. We look forward to hearing from you,
and we will talk to you again really soon.
For more on this topic and
the future of technology, visit forward thinking dot Com, brought
(24:21):
to you by Toyota. Let's Go Places,