Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Hey everybody, welcome back to the Elon Musk Podcast. This is a show where we discuss the critical crossroads that shape SpaceX, Tesla, X, The Boring Company, and Neuralink. I'm your host, Will Walden.
Hello everybody. My name is Alex. I'm the second participant in the Neuralink study, but I'm
(00:25):
here to count us down to the demo in 5, 4, 3, 2, 1.
(00:46):
Hi everyone, welcome to the Neuralink presentation. This is an update on the progress of the Neuralink team. It's been an incredible amount of progress. We're going to start off high level, generally describing what Neuralink is doing.
(01:08):
And then we're going to have a very deep technical dive, so you can actually get an understanding of what exactly we're doing at a granular level, and what we can do to enhance human capabilities and ultimately build a great future for humanity. So that's really inspiring. It's a funny thing that me talking
(01:29):
right now is a bunch of neurons firing that then result in speech that you hear, which causes neurons to fire in your brain. Part of this presentation is about demystifying the brain. It is a remarkable organ.
(01:50):
I mean, we are the brain, basically. When you say "you," that really is the brain. You can get a heart transplant, you can get a kidney transplant, but I don't know anyone who's gotten a brain transplant. So you are your brain, and your experiences are these neurons
(02:13):
firing with the trillions of synapses that somehow lead to conscious comprehension of the world. This is something that we have only begun to understand. We're really just barely at the beginning of understanding the nature of consciousness.
(02:34):
And I've thought a lot about what consciousness is. What is it? Where does consciousness arise? Because if you start at the beginning of the universe, assuming physics is true, the standard model of physics is true, then you have this Big Bang.
(02:54):
You know, the matter condensing into stars, those stars exploding. A lot of the atoms that are in your body right now were once at the center of stars. Those stars exploded, recondensed. Fast forward 13.8 billion years, and here we are.
(03:15):
And somewhere along that very long journey, at least as it seems to us, consciousness arose, or the molecules started talking to each other. And it begs the question of what
(03:35):
consciousness is. Is everything conscious? Maybe. It's hard to say. It seems, along that line, that there's no discrete point where consciousness didn't exist and then suddenly does exist. Maybe you have a condensation of matter that reaches
(03:56):
a certain density, but we don't know what the real answer is. We don't know what consciousness
is. But with Neuralink and the progress that the company's making, we'll begin to understand a lot more about consciousness and what it means to be. Along the way, we're going to solve a
(04:23):
lot of brain issues, where brains get injured or damaged in some way, or didn't develop in quite the right way. There are a lot of brain and spine injuries out there.
And I do want to emphasize that this is all going to happen
(04:45):
quite slowly, meaning you'll see it coming. Sometimes people think that suddenly there will be vast numbers of Neuralinks all over the place. This is not going to be sudden. You'll be able to watch it happen over the course of several years, and we go through exhaustive
(05:06):
regulatory approvals. So this is not something that we're just doing by ourselves without government oversight. We work closely with the regulators every step of the way. We're very cautious with Neuralinks in humans. The reason we're not moving faster than we are is that we're taking great care with each
(05:28):
individual to make sure we never miss. And so far we haven't, and I hope that continues into the future. Every single one of our implants in humans is working, and working quite well. And you'll get to hear from some of the people that have received the implants, in their own words.
So what we're creating here with the Neuralink
(05:49):
device is a generalized input-output technology for the brain. How do you get information into or out of the brain, and do so in a way that does not damage the brain or,
(06:09):
you know, cause any negative side effects? It's a very hard problem, and generally the reactions I've seen to this range from "it's impossible" to "it's already been done before." Those people should meet. The reality is that there actually have been a limited
(06:32):
range of brain-computer interfaces for several decades, on a very basic basis. What we're doing with Neuralink is dramatically increasing the bandwidth, by orders of magnitude. A human's bandwidth output is less than
(06:54):
one bit per second over the course of a day. There are 86,400 seconds in a day, and it's very rare for a person to do more than 86,400 bits of output per day. You'd have to be really talking a lot or typing all day, and you might exceed that.
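As a quick sanity check of that arithmetic, here is the calculation in Python (the heavy-typist figure of 10,000 words at roughly 10 bits of information per word is an illustrative assumption, not a number from the talk):

```python
# Sustained output of 1 bit/s for a full day:
SECONDS_PER_DAY = 24 * 60 * 60
print(SECONDS_PER_DAY)  # 86400 bits/day at 1 bit/s

# Illustrative assumption: a very heavy typist producing 10,000 words in a
# day, at a ballpark ~10 bits of information per word.
heavy_typist_bits = 10_000 * 10
print(heavy_typist_bits > SECONDS_PER_DAY)  # True: only extreme output beats 1 bit/s
```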
So what we're talking about here is going from maybe one bit
(07:16):
per second to ultimately megabits and then gigabits per second, and the ability to do conceptual, consensual telepathy. Now, the input to the brain is much higher, especially because of vision. Depending upon how you count it,
(07:37):
it might be on the order of a megabit, or in the megabit range, primarily due to sight. But even for input, we think that can be dramatically increased to the gigabit-plus level.
And a lot of the thinking that we do is that we
(08:02):
take a concept in our mind and compress it into a small number of symbols. When you're trying to communicate with somebody else, you're actually trying to model their mind state, and then take perhaps quite a complex idea that you have, maybe even a complex image or scene or kind of mental video, and try to compress that
(08:25):
into a few words or a few keystrokes. And it's necessarily going to be very lossy. Your ability to communicate is very limited by how fast you can talk and how fast you can type. What we're talking about is unlocking that potential, to enable you to communicate, like I said, thousands, perhaps millions of times faster than is
(08:46):
currently possible. This is an incredibly profound breakthrough. It would be a fundamental change to what it means to be a human.
So we're starting off with reducing human suffering, addressing issues that people have, say, if they've been
(09:07):
in an accident, or they have some degenerative neural disease so they're losing the capability to move their body, or some kind of injury, essentially. Our first product is called Telepathy, and it enables someone who has lost the ability to command their body to
(09:30):
communicate with a computer, move the mouse, and actually operate a computer with roughly the same dexterity, ultimately much more dexterity, than a human with working hands. Then our next product is
Blindsight, which will enable those who have total loss of
(09:50):
vision, including if they've lost their eyes or the optic nerve, or maybe were blind from birth, to be able to see again: initially low resolution, but ultimately very high resolution, and then in multiple wavelengths. So you could be like Geordi La Forge in Star Trek. You could see in radar, in infrared, in ultraviolet: superhuman capabilities,
(10:13):
cybernetic enhancement, essentially.
And then along the way, this should help us understand a lot more about consciousness. What does it mean to be a conscious creature? We'll understand vastly more about the nature of consciousness as a result of this. And then ultimately, I think
(10:34):
this helps mitigate the civilizational risk of artificial intelligence. We actually already sort of have three layers of thinking. There's the limbic system, which is kind of your instincts; the cortical system, which is your higher-
(10:56):
level planning and thinking; and then the tertiary layer, which is the computers and machines that you interact with, like your phone and all the applications you use. So people are actually already cyborgs. You can maybe get an intuitive sense for this from how much you miss your phone if you leave it behind.
(11:19):
Leaving your phone behind is almost like missing-limb syndrome; your phone is somewhat of an extension of yourself, as is your computer. So you already have this digital tertiary layer, but the bandwidth between your cortex and your digital tertiary layer is limited by speech and
(11:39):
by how fast you can move your fingers and how fast you can consume information visually. I think it's actually very important for us to address that input-output bandwidth constraint in order for the collective will of humanity to match the will of artificial intelligence.
(12:03):
That's my intuition, at least. So let's see. What this presentation is mostly about is attracting smart humans to come and work with us on this problem. This is not a presentation to raise money or anything like
(12:25):
that. We're actually very well funded. We have a lot of great investors; some of the smartest people in the world are invested in Neuralink. But we need smart humans to come here and help solve this problem. So with that, let's proceed.
(12:55):
Hey everyone, my name is DJ. I'm Co-founder and President of Neuralink. As Elon mentioned, we're actually standing in the middle of our robot space. We have a stage set up, but this is actually where some of the next-generation, most advanced surgical robots are being built. So welcome to our space.
(13:24):
It's important to highlight that this technology is not being built in the dark. This is not a secret lab where we're not sharing any of the progress. In fact, we're sharing the progress very openly, as well as telling you exactly what we're going to be doing. And we're hoping to progress on that as diligently, as safely, and as carefully as
(13:46):
possible. To start off: two years ago, when we did our previous fundraising round, we outlined this path and timeline to first human.
And we currently have a clinical trial in the US for a product that we call Telepathy, which allows users to control a phone or computer purely with their thoughts.
(14:06):
And you're going to see how we do this and the impact that it has had. Not only have we launched this clinical trial, but as of today we have not just one but
(14:27):
seven participants, and we have approval to launch this trial in Canada, the UK, and the UAE. So before we dive into what this technology is and what we built, I wanted to quickly share a video with you guys of when our first five
(14:50):
participants met each other for the first time.
Here you go. All right, we have everyone together. What's up guys? Thanks everybody for joining. Definitely want to introduce all of you. Yeah, I'm Nolan, AKA P1. My name's Alex. I am the second participant in the Neuralink study. I am Brad Smith, the ALS Cyborg,
(15:12):
P3. My name is Mike, P4, ALS, like Francis. Yeah, I'm RJ, I'm P5, and I'm just trying to do this one for the team here. So yeah, appreciate it. And y'all are trailblazers, you
(15:33):
know. Somebody's got to go first, man. That was you. Appreciate that.
What's been your favorite thing you've been able to do with the Neuralink so far? I've just had a good time being able to use it as I travel, flying, and drawing a little mustache on a cat. Had a lot of fun doing that. I mean, I've just had a good time playing around with it. Oh, you know what? I do know what my favorite BCI feature is. Probably not a feature, but I
(15:56):
just love Webgrid more than I love anything in my life, probably. I think I could play that game non-stop forever. Has to be Fusion 360, being able to design parts, design the hat logo with the BCI. That's what's up. Pretty sweet. That's sweet.
(16:16):
Yeah, I have a little Arduino that takes input from my quad stick and converts it into a PPM signal to go to an RC truck. Cool little rock crawler. Well, with the BCI I
(16:36):
wrote code. To drive the plane with the quad stick, that's awesome. The best thing I like about the Neuralink is being able to continue
(16:58):
to provide for my family and continue working. I think my favorite thing has probably been being able to turn on my TV, like, for the first time in two and a half years I was able to do that. So it's pretty sweet. But I like shooting with the homies, that's kind of nice. Excited to see what BCI's got going on. I got a question: what's your shirt say? It says, "I do a thing called whatever I
(17:32):
want." Now, one of the major figures of merit that we track is monthly hours of independent BCI use: effectively, are they using the BCI not at the clinic but at their home? And this is a plot of all of the different participants, the first five participants, and their usage per month over the course of the last year and a half.
(17:53):
And we're averaging around 50 hours a week of usage, and in some cases peak usage of more than 100 hours a week, which is pretty much every waking moment.
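That peak figure can be checked against total waking time; assuming roughly eight hours of sleep a night (an assumption, not a quoted number):

```python
# Waking hours available in a week, assuming ~8 hours of sleep per night.
waking_hours_per_week = (24 - 8) * 7
print(waking_hours_per_week)  # 112, so 100+ hours/week really is most waking time
```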
So I think it's been incredible to see all of our participants demonstrating greater independence through their use
(18:15):
of the BCI. Not only that, we've also accelerated our implantation cadence as we've amassed evidence of both clinical safety and value to our participants. To date, we have four spinal cord injury participants as well as three ALS participants, with the last two surgeries happening within one week of each other.
(18:39):
And we're just beginning. This is just the tip of the iceberg. Our end goal is to build a whole-brain interface. And what do we mean by whole-brain interface? We mean being able to listen to neurons everywhere, being able to write information to neurons anywhere, having fast wireless data transfer to enable a high-bandwidth
(19:01):
connection from our biological brain to external machines, and being able to do all of this with fully automated surgery, as well as enabling 24-hour usage.
And towards that goal, we're really working on three major product types. Elon mentioned earlier that our
(19:21):
goal is to build a generalized input-output platform and technology for the brain. For the output portion of it, which is extremely slow through our "meat sticks," as Elon calls them, the meat hands that are holding the mics: we're starting out by helping people with movement disorders,
(19:44):
who have lost the mind-body connection through a spinal cord injury, ALS, or a stroke, regain some of that digital as well as physical independence through a product that we're building called Telepathy. And this is our opportunity to build a high-channel read, or output, device. On the input side of things, there are
(20:06):
opportunities for us to help people that have lost the ability to see regain that sight through a product that we're calling Blindsight. And this is our opportunity to build high-channel write capabilities. And last but not least, being able to also help people that are suffering from debilitating neurological
(20:27):
dysregulation, psychiatric conditions, or neuropathic pain, by inserting our electrodes into any brain region: not just on the cortical layer, but into the sulci as well as deeper parts of the brain, the so-called limbic system, to
(20:47):
really enable better opportunities to regain some of that independence. Our North Star metrics are: first,
increasing the number of neurons that we can interface with; and second, expanding to many diverse parts of the brain. That starts with microfabrication, or lithography,
(21:09):
to increase the number of neurons that we can see from a single channel, and also mixed-signal chip design to increase the physical channel counts, so we can interface with more neurons and allow more information from the brain to the outside world. And, you know, everything we've
(21:30):
built from day one of the company has always been read- and write-capable. With Telepathy, our first product, the focus has been on the read capabilities, or the output. Now we want to home in on our write capability, and also show that through accessing deeper regions within the visual cortex we can
(21:50):
actually achieve functional vision. Let's go.
So now, just to step you through what the product evolution is going to look like over the next three years. Today, what we have is 1,000 electrodes in the motor cortex,
(22:14):
the small part of the brain that you see in this animation, called the hand knob area, which allows participants to control computer cursors as well as gaming consoles. Next quarter, we're planning to implant in the speech cortex to directly decode attempted words from brain signals into speech. And in 2026, not only are we going to triple the number of
(22:42):
electrodes from 1,000 to 3,000 for more capabilities, we're planning to have our first Blindsight participant, to enable navigation. And in 2027, we're going to
(23:04):
continue increasing channel counts, roughly another tripling, to 10,000 channels, and also enable for the first time multiple implants: not just one in motor cortex, speech cortex, or visual cortex, but all of the above. And finally, in 2028, our goal is to get to more than 25,000
(23:25):
channels per implant, have multiple of these, have the ability to access any part of the brain for psychiatric conditions and pain dysregulation, and also start to demonstrate what it would be like to actually integrate with AI.
(23:47):
And all this is to say that we're really building towards a set of fundamental, foundational technologies that would allow us to have hundreds of thousands, if not millions, of channels across multiple implants, for whole-brain interfaces that could actually solve not just these debilitating neurological conditions, but let us go beyond the limits of our biology. And this vertical integration,
(24:08):
and the talent and team that we have at Neuralink, has been and will continue to be the key recipe for the rapid progress we will be making. Just to recap real quick: Neuralink is implanted with a precision surgical robot. It's physically invisible, and one week later users are able to see their thoughts transform into actions. To share more
(24:28):
about what that experience is like, I'd like to welcome Sahej to the stage.
What's up guys? My name is Sahej, I'm from the
(24:50):
Brain Computer Interface team here at Neuralink, and I'm going to be talking about two things today. The first is: what exactly is the Neuralink device capable of doing right now? And the second: how does that actually impact the day-to-day lives of our users? Very simply put, what the Neuralink device does right now is it
(25:10):
allows you to control devices just by thinking. To put that a bit more concretely, I'm about to play a video of our first user. His name is Noland, if you remember from DJ's section. What Noland is doing is looking at a normal, off-the-shelf MacBook Pro. And with his Neuralink device, you're going to see he's
(25:31):
able to control the cursor simply with his mind. No eye tracking, no other sensors. And what's special about this particular moment is that it's the first time someone is using a Neuralink device to fully control their cursor. This is not your ordinary brain-controlled cursor. This is actually
(25:53):
record-breaking control, literally on day one, beating decades of brain-computer interface research. And I'm about to show you the clip of day one, Noland breaking the BCI world record. Oh, well done man, he's the new
(26:20):
world record holder. I thought it was higher. I thought I would have to get to 5 or something. Oh my gosh, that's crazy.
It's pretty cool, yeah. Another really fun thing you
(26:47):
can do with the Neuralink device, outside of controlling a computer cursor, is you can actually plug it in via USB to a lot of different devices. And here we have Noland playing Mario Kart. Now, what's special about this particular clip is that Noland is not the only cyborg playing Mario Kart in it. We actually have a whole community of users, as mentioned
(27:08):
earlier, and this is literally five of our first users of Neuralink playing Mario Kart together over a call. Yeah, Mario Kart is cool.
You know, you're using one joystick and then you're
(27:28):
clicking a couple of buttons to throw items. What would be even cooler is if you could control two joysticks simultaneously with your mind. What I'm about to show you is, I think, the first time someone has played a first-person shooter game with a brain-computer interface. This is Alex and RJ playing Call of Duty, controlling one
(27:51):
joystick to move and the other joystick to aim their gun, and shooting with a button. Here's Alex shooting another person. Oh dear God... I know he's shot me in the face. Now that we have a bit of a
(28:14):
sense of what the BCI can do, a very important question to answer is: how does this impact the day-to-day lives of the people that use it every day? So I'm about to show you a clip, going back to Noland for a second. We simply asked him randomly during a day, a couple of months ago, how he enjoys using the BCI, and this is his
(28:36):
candid reaction. I work basically all day from when I wake up. I'm trying to wake up at like 6:00 or 7:00 AM, and I'll do work until session. I'll do session, and then I'll work until, you know, 11:00 PM
(28:57):
or 12:00 AM. I'm learning my languages, I'm learning my math. I'm relearning all of my math. I am writing, I am doing the class that I signed up for, and I just wanted to point out that
(29:20):
this is not something I would be able to do without the Neuralink. Next I want to talk a bit about
Brad. You guys may already know him as the ALS Cyborg. Brad also has ALS, and what separates him from other users is that he's nonverbal, so he can't speak. Why this is relevant is that he relied, at least before the
(29:42):
Neuralink, on an eye-gaze machine to communicate, and a lot of eye-gaze machines you can't use outdoors. You really need a dark room. So what this means is that for the last six years, since Brad was diagnosed with ALS, he's really been unable to leave his house. Now, with the Neuralink device, we're going to show you a clip of him with his kids at the park, shot by Ashlee Vance and
(30:03):
the team. Guys ready? No, absolutely.
I am absolutely doing more with Neuralink than I was doing with eye gaze. I've been a Batman for a long time, but I go outside now. Going outside has been a huge blessing for me, and I can control the computer with
(30:24):
telepathy. Dad's watching. OK, he's watching on the camera. Did he lose one of the arms? The last user I
want to talk about is Alex. You've seen some clips of him earlier. What's special about Alex to me is that he's a fellow left-handed guy who writes in cursive all the time. And what he mentioned is that since a spinal cord injury three or four years ago, he's been
(30:47):
unable to just draw or write. And he always brags about how good his handwriting was. So we actually got to put it to a test. We gave him a robotic arm, and I think this is the first time he tried using the robotic arm to write anything. This is a sped-up version of him writing and drawing something. Yeah, controlling a robotic arm
(31:15):
is cool, but this one has a clamp. What would be cooler is if you could decode the actual fingers, the actual wrist, all the muscles of the hand, in real time. Just in the past couple of weeks we were able to do that with Alex, and you're about to see him and his uncle playing a game. Rock, paper, scissors, shoot.
(31:39):
Rock, paper, scissors, shoot. Rock, paper, scissors, shoot. Rock, paper, scissors, shoot.
(32:09):
Cool. Yeah, that's pretty dope. Controlling a robotic hand on screen is obviously not super helpful for most people. Fortunately, we have connections with Tesla, who have the Optimus hand, and we're
(32:30):
actively working on giving Alex an Optimus hand so that he can actually control it in his real life. And here's an actual replay of the end of that video, using Alex's neural signals on an Optimus hand. Sean, if you want to play that?
(32:53):
Yeah. Actually, let me add a few things to that. As we advance the Neuralink devices, you should be able to actually have full-body control and sensors from an Optimus robot.
(33:14):
So you could basically inhabit an Optimus robot: not just the hand, the whole thing. You could basically mentally remote into an Optimus robot, which would be kind of cool. The future's going to be weird, but pretty cool.
(33:41):
Another thing that could be done is, for people that have, say, lost a limb, lost an arm or a leg or something like that, we think in the future we'll be able to attach an Optimus arm or leg. I remember that scene from Star Wars where Luke Skywalker gets
(34:02):
his hand chopped off with a lightsaber and gets kind of a robot hand. I think that's the kind of thing that we'll be able to do in the future, working with Neuralink and Tesla. So it goes far beyond just operating a robot hand: replacing limbs, and having kind of a whole-body robot experience.
And then I think another thing that will be possible, I think
(34:25):
it's very likely in the future, is to be able to bridge where the neurons are damaged. You can take the signal from the brain and transmit that signal past where the neurons are damaged or severed, to the rest of the body. So you could reanimate the body: if you have a Neuralink implant in the brain and then one in the spinal cord,
(34:48):
then you can actually bridge the signals, and you could walk again and have full-body functionality. Obviously that's what people would prefer. To be clear, we realize that that would be the preferred outcome, so that even if you have a broken neck, you could. I'm actually,
(35:09):
at this point, I'd say fairly confident that at some point in the future we'll be able to restore full-body functionality.
Yes. So hello, hello everyone. My name is Nir, and I'm leading the BCI applications group. The videos that Sahej just shared with you, I've
(35:32):
probably watched them maybe thousands of times, but I still get goosebumps every time I watch them. And I think this is one of the cool perks when you get a job here at Neuralink: you might get goosebumps every week, or maybe every few days in good weeks.
And this is really fun as an engineer. It's really cool
(35:53):
because you can build a new feature, a new machine learning model, a new software feature, and test it on the same day with a participant and get feedback. And you already saw with our first device, Telepathy, that we can address the very diverse needs of the different users that we have, from moving a cursor, to
(36:13):
playing games, to moving a robotic arm with multiple fingers. And we could not have done it without the Neuralink device. The Neuralink device gives us something that no other device can: single-neuron recording from thousands of channels simultaneously.
The Telepathy product is basically recording the neural activity from a small area in the motor cortex that's involved in
(36:36):
the execution of hand and arm movements. But if we go only about two or three inches below, there's another brain area that's involved in the execution of speech. And with the same device, the same machine learning model architecture, the same software pipeline, the same surgical robot, we can have a new application, and we can do it very quickly. It's really interesting: if
(36:57):
we can decode someone's intention to speak silently, non-vocal communication, we can use that to revolutionize the way we interact with computers, with technology, and with information. Instead of typing with your finger, or moving the mouse, or talking to your phone, you'll be able to interact with the computer at the speed of thought.
(37:19):
It will make this interaction much faster and much more intuitive. The computers will understand what you want to do. And we can also expand that to AI. We can build an interface with AI through which you will be able to retrieve information, and will be able to store your thoughts anywhere, anytime, privately and
(37:40):
silently. Again, because we build a fundamental technology, a platform, and we do everything in house, we own the entire stack, from neurons to pixels on the user's computer. Now I'll pass it to Rus to talk about the UI.
(38:03):
Thank you, Nir. Each spike that our implant detects goes on a fairly remarkable journey to ultimately form a pixel on a participant's display. And that experience starts, of course, with unboxing: the very first time that a participant pairs to and meets their implant, this invisible part of their body, and sees their own
(38:26):
spikes materialize across the display. From there, they'll go into body mapping and actually imagine moving their arm again, getting a feel for what feels natural to them and what doesn't. And they'll take that into calibration, using one of those motions to actually move a
(38:47):
cursor again, iteratively refining their control as they go throughout this process, until finally they're teleported back to their desktop and can experience the magic of neural control for the very first time. And our control interfaces are where the OS integration that we do really shines, letting us
(39:12):
adapt both control and feedback for every interaction. For familiar interactions like scrolling, we can surface an indicator over the scrollable parts of the display, add a touch of gravity to automatically pop a participant's cursor onto that indicator as they approach, show
(39:32):
the actual velocities that we decode inside of it, and add a bit of momentum to those velocities to carry them forward as they glide across the page. There are also unique interactions that we need to solve for in this space. For example, when a participant is watching a movie or just talking to somebody next to them, the brain is still very active,
(39:53):
and that activity can actually induce motion in the cursor, distracting them from that moment. So when a participant wants to just get their cursor out of the way, they can push it into the edge of the display to park it there. And of course we add gravity to sort of hold it still, but they can push it out with either a firm push or, in this case, a gesture.
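The gravity and momentum behaviors described here can be sketched in a few lines of Python. This is a toy model, with made-up constants and function names, not Neuralink's implementation: decoded velocity is blended with leftover momentum, and a nearby target (a scroll indicator, or the display edge for parking) attracts the cursor once it's in range.

```python
import math

MOMENTUM = 0.85          # fraction of previous velocity carried forward (assumed)
GRAVITY_RADIUS = 40.0    # px within which a target attracts the cursor (assumed)
GRAVITY_STRENGTH = 0.2   # fraction of the remaining gap closed per step (assumed)

def step_cursor(pos, vel, decoded_vel, target=None):
    """Advance the cursor one frame: blend the decoded velocity with
    momentum, then apply gravity toward a nearby target if one is in range."""
    vel = tuple(MOMENTUM * v + (1 - MOMENTUM) * d for v, d in zip(vel, decoded_vel))
    pos = (pos[0] + vel[0], pos[1] + vel[1])
    if target is not None:
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        if math.hypot(dx, dy) < GRAVITY_RADIUS:
            pos = (pos[0] + GRAVITY_STRENGTH * dx, pos[1] + GRAVITY_STRENGTH * dy)
    return pos, vel

# With zero decoded velocity, momentum carries the cursor forward while
# gravity settles it onto the nearby target -- the "park and hold" behavior.
pos, vel = (100.0, 100.0), (5.0, 0.0)
for _ in range(60):
    pos, vel = step_cursor(pos, vel, (0.0, 0.0), target=(130.0, 100.0))
print(round(pos[0]), round(pos[1]))  # 130 100
```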
(40:16):
And of course, it's when it goeswithout saying that all of these
control interfaces are designed hand in hand with our
participants. So huge shout out to both Noland
and Brad for helping us design these two.
And those control interfaces, of course, extend to typing.
We have a great software keyboard that does everything
you'd expect it to, popping up when a participant clicks on a
text field, giving them feedback about the click on the surface
(40:39):
of the key, and supporting both dictation and swipe.
Hi everyone, I'm Harrison, an ML engineer here at Neuralink.
And I must say, being an ML engineer at Neuralink is a bit
like being a kid in a candy store.
(40:59):
When you think of the inputs to most ML systems out there, you
might think of pixels, of tokens, or of a user's Netflix watch
history. The input to our systems is a
little different. It is pure raw brain power.
And when we think about the ML systems we can build here at
Neuralink, really we're limited by our imagination and our
creativity. There's no reason our ML systems
(41:21):
can't do anything that the human brain can do, such as
controlling a phone, typing, or even gaming.
Right here to my left is actual footage of Alex, one of our
participants, playing a first person shooter against RJ,
another one of our participants. Now, for those unfamiliar with
first person shooters, this is not a trivial feat.
(41:42):
It requires 2 fully independent joysticks or 4 continuous
degrees of control, as well as multiple reliable buttons.
Now, contrary to popular belief, the Neuralink does not simply
read people's minds; it's simply reading neuronal activations
corresponding to motor intent. So one of the fun challenges
(42:02):
with this project was figuring out which motions were going to
be mapped to the joystick. We started with the typical left
thumb and right thumb, but quickly found that the dominant
hand overshadowed the non-dominant hand.
My personal favorite is we had one of our participants imagine
walking for the left joystick and aiming for the right
joystick. So in game, they were simply
doing naturalistic motions like you might do in virtual reality
(42:25):
in Ready Player One, and that was really cool to watch.
What we ended up on was the thumb for the left joystick and
the wrist for the right joystick.
And I challenge the audience to try to replicate their motions.
I'm really in awe of them being able to pull this off.
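As a rough sketch of the mapping described here, decoded 2-D velocities for two imagined motions could feed the two joysticks of a standard controller. The dictionary keys, motion names, and clamping ranges are illustrative assumptions, not Neuralink's actual interface:

```python
def clamp(v, lo=-1.0, hi=1.0):
    """Keep an axis value inside the standard joystick range."""
    return max(lo, min(hi, v))

def to_gamepad(decoded):
    """Map decoded velocities for two imagined motions onto the two
    joysticks of a controller: 4 continuous degrees of control."""
    thumb_vx, thumb_vy = decoded["thumb"]  # imagined thumb motion -> movement stick
    wrist_vx, wrist_vy = decoded["wrist"]  # imagined wrist motion -> aim stick
    return {
        "left_stick": (clamp(thumb_vx), clamp(thumb_vy)),
        "right_stick": (clamp(wrist_vx), clamp(wrist_vy)),
    }
```

Buttons would be decoded as separate discrete gestures alongside these four continuous axes.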
I want to talk a bit about the progress on our cursor
calibration experience. To my left, here you can see RJ
(42:45):
completing his first-ever cursor calibration with a redesigned
open-loop flow: from first gathering information about his
intent and how to map the neural activity, to the first time he
controls a cursor, to the final product where he has smooth and
fluid control of his computer. And most remarkably, this
experience took only 15 minutes from start to finish, 15 minutes
(43:06):
from no control to fluid computer use.
Contrast that to a year and a half ago with P1, where that was
multiple hours to get to the same level of control and
several engineers standing around a table pulling their
hair out. There was virtually no need for
(43:27):
Neuralink engineers to even be at this session.
This was basically an out-of-the-box experience for
our participants. And even more remarkably, we're
continuing to smash day one records, with RJ being able to
achieve 7 BPS on his very first day with a Neuralink.
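For context on that 7 BPS figure: a common way to quantify cursor throughput in the BCI literature is the bitrate of a target-selection task. Whether Neuralink's BPS metric matches this exact formula is an assumption; this is one widely used definition:

```python
import math

def bitrate_bps(num_targets, correct, incorrect, seconds):
    """Bits per second for a target-selection task, per a widely used
    BCI bitrate definition: log2(N - 1) * max(correct - incorrect, 0) / t."""
    if num_targets < 2 or seconds <= 0:
        raise ValueError("need at least 2 targets and a positive duration")
    net_correct = max(correct - incorrect, 0)
    return math.log2(num_targets - 1) * net_correct / seconds
```

For example, 60 net-correct selections on a 25-target grid in 45 seconds works out to roughly 6.1 BPS.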
(43:49):
Now such an effective and efficient calibration process is
only made possible by high-fidelity estimations of a user's
intention, or labels. And to briefly illustrate just
how challenging of a problem that is, this is an animation of
myself trying to draw circles on my desktop with a mouse.
Now the task was simple, draw uniform circles at a constant
(44:10):
speed, repeatedly. And as you can see by that
animation, I am horrible at that.
Even though my intent was pretty obvious and unambiguous, the
execution was really poor. There is a ton of variation in
both speed and the shape itself. To visualize this a little
differently, each row here is one of those circles unwound in
(44:31):
time with synchronized starts, and you can just see how much
variation there is in the timing of each circle, as well as in
what I'm doing at any given point in time.
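The kind of execution variability shown in that animation can be summarized very simply. This toy sketch is my own illustration, not Neuralink's labeling pipeline: it measures spread in timing and in shape across repeated circle traces.

```python
import math

def circle_variation(traces):
    """Summarize inconsistency across repeated attempts at the same
    circle (each trace is a list of (x, y) samples around the origin):
    spread of trace lengths (timing) and of point radii (shape)."""
    def spread(values):
        mean = sum(values) / len(values)
        return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    lengths = [len(trace) for trace in traces]
    radii = [math.hypot(x, y) for trace in traces for (x, y) in trace]
    return {"timing_spread": spread(lengths), "radius_spread": spread(radii)}
```

Perfectly uniform circles drawn at a constant speed would score zero on both; noisy human execution scores high, which is exactly what makes intent labels hard to estimate.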
Orthogonal to the labeling problem is neural non-stationarity,
or the tendency of neural signals to drift over
time. And I think that's honestly a
beautiful thing, right? If your neural signals
(44:51):
didn't drift, you couldn't grow. When you wake up the next day,
you're not the same person you were the day before.
You've learned, you've grown, you've changed, and so too must
your neural data change. This animation
here is a simple illustration of the representation learned by
the decoder and how it drifts the further away we get from the
day it was trained on. This is one of the key
challenges we need to solve here at Neuralink to unlock a fluid,
(45:13):
product-level experience for our users.
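A toy simulation of that drift, under assumptions of my own (each channel's mean does a Gaussian random walk), shows why a fixed decoder degrades with distance from its training day:

```python
import random

def simulate_drift(days=10, n_channels=32, drift=0.05, seed=0):
    """Toy model of neural non-stationarity: a 'decoder' memorizes each
    channel's mean on day 0, the true statistics random-walk away from
    it each day, and the mean absolute mismatch grows over time."""
    rng = random.Random(seed)
    true_means = [rng.gauss(0.0, 1.0) for _ in range(n_channels)]
    decoder_means = list(true_means)  # fit once, on the training day
    errors = []
    for _ in range(days):
        true_means = [m + rng.gauss(0.0, drift) for m in true_means]
        mismatch = sum(abs(t - d) for t, d in
                       zip(true_means, decoder_means)) / n_channels
        errors.append(mismatch)
    return errors
```

Plotting the returned errors shows them climbing day over day, which is the product challenge: either recalibrate periodically or adapt the decoder online.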
Hey everyone, my name is Joey.
Blindsight is our project to build a visual prosthesis to
(45:35):
help the blind see again. Users would wear a pair of
glasses with an embedded camera and receive an implant in their
visual cortex. Scenes from the environment are
recorded by the camera and processed into patterns of
stimulation delivered to the brain, causing visual perception
(45:58):
and restoring functionality. Now, Blindsight will be enabled
by placing our implant into the visual cortex.
This is a new brain area for us,and this brings new
opportunities and challenges. So the surface of the brain for
visual cortex represents just a few degrees of visual angle in the
center of the visual field. Larger fields of view are
(46:20):
represented deep within the cortical folds of the calcarine
fissure. Our threads are able to access
these deeper structures, providing the possibility of
restoring vision over a functional, useful visual field.
So the N1 implant has had experimental stimulation
capabilities for quite some time, but our new S2 chip is
(46:41):
designed from the ground up for stimulation.
It provides over 1600 channels of electrical stimulation, high
dynamic range recording capabilities and a wide range of
micro stimulation currents and voltages.
We can achieve these capabilities because we are
vertically integrated and we designed this custom ASIC in
(47:02):
house. Similarly, we design and
fabricate our electrode threads in house, and here you can see
one of our standard threads, designed for recording, in an
electron micrograph. For Blindsight,
our requirements are a little different, and our vertical
integration allows us to rapidlyiterate on the design and
manufacturing of these threads for this new purpose.
(47:25):
So here I'm using red arrows to highlight the electrode
contacts, which are optimized for stimulation.
And as you can see, they're a little bit larger, which results
in a lower electrical impedance for safe and effective charge
delivery, which is important for Blindsight.
Now, how can we calibrate our implant for Blindsight?
(47:46):
So here's one way: we stimulate on the array, picking, say, three
different channels. The user perceives something,
say three spots of light somewhere in their visual field, and points
at them. We track their arm and eye
movements and repeat this process for each of the channels
on the array. And here's what a simulated
(48:08):
example of Blindsight vision could look like after
calibration. Now, I showed you how, for Blindsight,
we need to insert threads deeper into the brain than we
have previously, and doing this requires state-of-the-art
(48:30):
medical imaging. So we worked with Siemens to get
some of the best scanners on Earth.
We built out our imaging core from scratch in the past year.
Actually, it was faster than that.
It was about four months from dirt to done.
Since bringing the scanners online, we've scanned over 50
internal participants, building out a database of human
structural and functional anatomy.
(48:52):
What can we do with the imaging information from these scanners?
Well, medical imaging can be used for surgical placement.
It lets us parcellate brain regions by their function, and we
use our imaging capabilities to refine the placement for
Telepathy. It also gives us the capability
to target new brain regions for future products such as
Blindsight or a speech prosthesis. And we're working towards more
(49:14):
capabilities. So: one-click, automated planning
of surgery from functional images to robot insertion
targets. Here you can see a screen
capture from one of our in-house tools for end-to-end
surgical planning. You can see a region of motor
cortex known as hand knob and the thread trajectory plans that
will be sent directly to the robot.
This is a really incredible degree of automation that's only
(49:37):
possible because we're controlling the system from one
end to the other. My name is John and I lead the
robot mechanical team. This is our current R1 robot.
It was used to implant the first 7 participants.
This robot works really well, but it has a few flaws.
One of which is the cycle time is rather slow.
(49:59):
So inserting each thread takes, in a best-case scenario, 17
seconds. And in many cases, external
disturbances cause us to have to retry: re-grasp that
thread and then reinsert it. To scale the number of
neurons we access through higher channel counts and increased numbers
of threads, we need to have a much faster cycle time.
(50:19):
So let me introduce our next generation robot, which is right
here. Through rethinking the way that
we hold the implant in front of the robot, by holding it
directly on the robot head, we will achieve an
(50:39):
11x cycle-time improvement, so each thread takes about 1.5
seconds. We also get a lot of
surgery workflow process improvements
through deleting the separate operator station and implant
stand. Now the outside of the robot
looks pretty similar between the two, but it's what's inside that
really counts. Each system has been redesigned
(51:01):
from the ground up with a focus on reliability,
manufacturability, serviceability and using a lot
of our vertical integration techniques.
It's enabled us to have a lot more control of the system end
to end. Now that fast cycle time doesn't
mean much if it's not compatible with a significant portion of
the human population. Prior to each surgery we scan
(51:22):
our participant's anatomy and ensure that they will be
compatible with the robot and vice versa.
Unfortunately, the robot isn't compatible with everyone so we
had to extend the reach of the needle in the next generation
robot and now we're compatible with more than 99% of the human
population. We've also increased the depth
to which the needle can insert threads.
Now we can reach more than 50mm from the surface of the brain,
(51:43):
accessing and enabling new indications.
We have to produce a ton of custom sterile components for
each surgery. We actually supply more than 20
of these parts. Many of these parts are made
through traditional CNC manufacturing capabilities,
which we do just on the other side of this wall actually, and
some custom developed processes like this femtosecond laser
milling used to manufacture the tip of the needle.
(52:05):
Now these processes take quite a bit of time, effort and cost.
So let's take a look at how we're going to reduce costs and
time for one of the components. So the current needle cartridge
has a total cycle time of about 24 hours, and the machined
components cost about $350. The final assembly is performed
by highly skilled technicians.
(52:26):
They have to glue a 150-micron-diameter cannula onto this
wire-EDM-machined stainless steel base plate.
They have to electropolish a 40-micron wire into a sharp taper,
and then they have to thread that 40-micron wire into a
60-micron hole in the cannula. This is done manually, and then
they finally have to laser-weld all the components together.
(52:48):
The next-generation needle cartridge takes only 30 minutes of cycle
time and $15 in components. We were able to delete the
wire-EDM-machined base plate and the cannula gluing step by switching
to an insert-molded component. So we get a box of these base
plates with the cannula already installed, like 1,000
of them, for like 5 to 10 dollars apiece.
We also deleted the electropolishing step with the revised
(53:10):
needle tip geometry, which is also compatible with inserting
the threads through the dura. We have a few revised
manufacturing techniques to delete the manual threading,
through basically a funnel. Rather simple, but it has had a
big impact. And then we're able to delete
the laser welding by using crimping.
(53:32):
Hi, I'm Julian. I'm one of the leads on the
implant team. So the way humans communicate
today, if they want to output information, is by using their
hands and their voice, as I'm doing right now.
And if you want to receive information, you use your ears
(53:53):
and your eyes. And of course, that's how you're
receiving this very talk. But we've built this implant and
this implant is very special because it is the first time
that we're able to add a completely new mode of data
transfer into and out of the brain.
If you look at this device in a nutshell, it's really just
(54:14):
sampling voltages in the brain and sending them over radio.
But if you zoom out and look at the system from end to end, what
you actually see is that we're connecting your brain or
biological neural net to a machine learning model or a
silicon neural net on the right hand side.
And I actually think this is really elegant because the
(54:36):
machine learning model on the right hand side is in fact
inspired by neurons on the left hand side.
And so in some sense, we're really extending the fundamental
substrate of the brain. For the first time, we're able
to do this in a mass market product that's a very, very
special piece of hardware. So these are some of the first
(55:02):
implants that we ever built. There are electrodes that were
made with our in house lithography tools.
We have custom ASICs that we also designed in house.
And this was really a platform for us to develop the technology
that allows us to sense microvolt-level signals in the brain across
thousands of channels simultaneously.
(55:22):
We learnt a lot from this. But as you'll notice in the
right two images, there are USB-C connectors on these devices.
These were not really the most implantable implants.
This next set of images are the wireless implants, and it was a
complete evolution that we went through to add the battery, the
antenna, the radio, and to make it actually fully implantable.
(55:46):
Once it's implanted, it's completely invisible.
It's very compact, it's modular, and it's a general platform that
you can use in many places in the brain.
Going from that top row to the bottom row is very challenging.
The implant you see on the bottom right here is in fact the
device that we have working in seven participants today, and
(56:06):
it's augmenting their brain every day and restoring their
autonomy. But getting to that point
involved a huge number of formidable engineering
challenges. We first had to make a hermetic
enclosure, passing 1,000 separate conductors through the enclosure
of the device. We had to figure out how to make
charging seamless and work with very tight thermal constraints
in a very, very small area. And then we also had to scale up
(56:30):
our testing infrastructure so that we could support large
scale manufacturing and very safe devices and have confidence
in our iteration cycle. So what's next?
We're going to be increasing our manufacturing so that we don't
just produce, you know, a small number of
implants per year, but thousands and then eventually millions of
implants per year. We're also going to be
(56:50):
increasing channel count. More channels means more neurons
are sensed, which means more capabilities.
In some sense, we often think a lot about the Moore's Law of
neurons that we're interacting with.
And in the same way that Moore'sLaw propelled forward many
subsequent revolutions in computing, we think that sensing
more and more neurons will also completely redefine how we
(57:13):
interact with computers and reality at large.
I want to leave you with one final thought.
When I was a child, I used a 56k modem to access the Internet.
If you remember what it's like, you would go to a website.
You lucky bastard.
Yeah, when I was a child, we had
(57:35):
acoustic couplers. Oh yeah, OK.
So just beep. Just beep at each other.
Yeah, the first modem was the acoustic coupler, an incredible
device honestly. But then, I guess if
you're my age, you started with a 56 kbit modem, and you
(57:55):
would go to a website, and there would be an image, and
it would load slowly, pixel by
pixel, on the screen. So that's what it's like to
be bandwidth limited. Now imagine using the current
Internet with that same modem. It's inconceivable.
It would be impossible to do. So what broadband Internet did to
(58:17):
the 56k modem is what this hardware is going to do to the
brain. We are trying to drastically
expand the amount of bandwidth that you have access to, to have
a much richer experience and superhuman capabilities.
(58:37):
So I guess just to kind of close out and to recap: today, Neuralink
is working reliably and has already changed the lives of
seven participants, making a real impact.
And our next milestone is to go to market and enable scaling of
this technology to thousands of people, as well as to expand
(58:59):
functionality beyond just movement to enable
sophisticated robotic arm control, speech, vision to give
sight back, and even getting to the speed of thought.
I hope you got a good sort of sample of our technology stack
and the challenges that we have and I'd like to hand over the
(59:19):
mic to Elon for any closing remarks.
Well, we're trying to give you a sense of the depth of talent
at Neuralink. There's a lot of really smart
people working on a lot of important problems.
This is one of the most difficult things to actually
(59:40):
succeed in creating and have it work and work at scale and be
reliable and available for millions of people at an
affordable price. So, a super hard problem, and we'd
like to have you come join and help us solve it.
Thank you. Hey, thank you so much for
(01:00:15):
listening today. I really do appreciate your
support. If you could take a second and
hit the subscribe or the follow button on whatever podcast
platform that you're listening on right now, I greatly
appreciate it. It helps out the show
tremendously and you'll never miss an episode.
And each episode is about 10 minutes or less to get you
caught up quickly. And please, if you want to
(01:00:36):
support the show even more, go to patreon.com/stagezero.
And please take care of yourselves and each other, and
I'll see you tomorrow.