Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the Neuralink presentation. This is an update for
the progress from the neuralink team. It's been an incredible
amount of progress.
Speaker 2 (00:08):
This is.
Speaker 1 (00:11):
We're going to start off high level generally describing what
neuralink is doing, and then we're going to have a
very deep technical dive so you can actually get an
understanding of what exactly we're doing at a granular level
and what we can do to enhance human capabilities and
ultimately build a great future for humanity. So that's neurons firing.
(00:35):
It's a funny thing that me talking right now is a
bunch of neurons firing that then result in speech.
Speaker 2 (00:45):
That you hear are, of course, neurons firing in your brain.
Speaker 3 (00:51):
Part of.
Speaker 1 (00:53):
This presentation is about demystifying the brain. It is a
remarkable organ. I mean we are the brain. Basically when
you say you that really is uh, you're the brain.
Like you can you can get a heart transplant, you
can get a kidney transplant, but I don't know anyone
who's gotten a brain transplant.
Speaker 2 (01:14):
So you are your brain.
Speaker 1 (01:19):
And your experiences are these neurons firing with the trillions
of synapses.
Speaker 2 (01:28):
That somehow lead to conscious comprehension of the world.
Speaker 1 (01:32):
This is something that we have only begun to understand.
We're really just barely at the beginning of understanding of
what is the nature of consciousness. And I've thought a
lot about what what is consciousness?
Speaker 2 (01:45):
What is it? Where does consciousness arise?
Speaker 1 (01:51):
Because if you start at the beginning of the universe,
assuming physics is true, the current standard model
of physics is true, then you have this.
Speaker 2 (02:01):
Big bang, the matter condensing.
Speaker 1 (02:05):
Into stars, those stars exploding. A lot of the atoms that are in your body right now were once at the center of stars; those stars exploded, recondensed. Fast forward thirteen point eight billion years, and here we are. And
somewhere along that very long journey to us, at least
(02:31):
consciousness arose or the molecules started talking to each other.
And it begs the question of what is consciousness? Is everything conscious? Maybe it's hard to say where along that line; there's no sort of discrete point where
(02:54):
consciousness didn't exist and then suddenly it does exist. It seems to be maybe you have a condensation of matter that has a certain density. We don't know what the real answer is; we don't know what consciousness is.
But with Neuralink and the progress the
(03:17):
company is making, we'll begin to understand a lot more about consciousness and what it means to be. Along
the way, we're going to solve a lot of brain issues where the brain gets
(03:40):
injured or damaged in some way, or didn't develop in quite the right way. But yeah, there's a lot of brain and spine injuries that we'll solve along the way. And I do want to emphasize that this is all going to happen quite slowly, meaning you'll
you'll see it coming. Sometimes people think that suddenly there
will be vast numbers of neural links all over the place.
(04:04):
This is not going to be sudden. You'll be able
to watch it happen, you know, over the course of
several years. And we go through exhaustive regulatory approvals. So
this is not something that we're just doing out there by ourselves without government oversight. We work closely with the regulators
(04:26):
every step of the way. We're very cautious with the Neuralinks in humans. That's the reason we're not moving faster than
we are is because we're taking great care with each
individual to make sure we never miss and so far
we haven't, and I hope that continues into the future.
Every single one of our implants and humans is working
(04:47):
and working quite well, and you'll get to hear from
some of the people that have received the implants and
hear it in their words. So what we're creating here with the Neuralink device is a generalized input-output
Speaker 2 (05:04):
Technology for the brain.
Speaker 1 (05:06):
So it's how do you get information into or out
of the brain and do so in a way that
does not damage the brain or, you know, cause any negative side effects, of course. So it's a very hard problem
and generally the reactions I've seen to this range from
(05:28):
"it's impossible" to "it's already been done before." Those people should meet. Actually, the reality is that there have been limited brain-to-computer interfaces for several decades, on a very basic basis. It's just that what we're doing with Neuralink
(05:52):
is dramatically increasing the bandwidth by many orders of magnitude. A human's bandwidth output is less than one bit per second over the course of a day; there are eighty-six thousand, four hundred seconds in a day, and it's very rare for a person to do more than eighty-six thousand, four hundred bits of output per day.
(06:15):
You'd have to be really talking a lot or typing
all day and you might exceed that. So what we're
talking about here is going from maybe one bit per second to ultimately megabits and then gigabits per second, and the ability to do conceptual, consensual telepathy. Now, the input
(06:39):
rate is much higher, especially because of vision. Depending upon how you count it, it might be on the order of a megabit, or in the megabit range, for input, primarily due to sight. But even for input, we think that can be dramatically increased to the
Speaker 2 (07:02):
Gigabit-plus level.
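To put those figures in perspective, here is a rough back-of-the-envelope check of the numbers quoted in the talk (eighty-six thousand four hundred seconds per day, roughly one bit per second of sustained output, megabit-scale visual input). The specific rates used below are illustrative assumptions, not Neuralink specifications.

```python
# Rough sanity check of the bandwidth figures quoted in the talk.
# All rates here are illustrative assumptions, not Neuralink specs.

SECONDS_PER_DAY = 24 * 60 * 60          # 86,400 seconds
output_bits_per_day = 86_400            # ~1 bit/s of sustained output, as quoted
avg_output_bps = output_bits_per_day / SECONDS_PER_DAY

visual_input_bps = 1_000_000            # ~1 megabit/s of input, mostly vision (order of magnitude)
target_output_bps = 1_000_000_000       # "ultimately megabits and then gigabits per second"

print(f"average output rate today: {avg_output_bps:.2f} bits/s")
print(f"speedup at gigabit output: {target_output_bps / avg_output_bps:,.0f}x")
```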
Speaker 1 (07:05):
And a lot of the thinking that we do is where we take a concept in our mind and we compress that into a small number of symbols. So when you're trying to communicate with somebody else, you're actually trying to model their mind state and then take perhaps quite
(07:26):
a complex idea that you have, maybe even a complex image or scene or kind of mental video, and try to compress that into a few words or a few keystrokes, and it's necessarily
Speaker 2 (07:37):
Going to be very lossy.
Speaker 1 (07:39):
Your ability to communicate is very limited by how fast
you can talk and how fast you can type, and
what we're talking about is unlocking that potential to enable
you to communicate, like I said, thousands, perhaps millions of
times faster than is currently possible. This is an incredibly
profound breakthrough. This would be a fundamental change
(08:03):
to what it means to be a human. So we're starting off with reducing human suffering, or addressing issues that people have, say if they've been in an accident, or they have some, uh, neural disease that's degenerative, so they're losing the capability to move their body, or some
(08:26):
kind of injury, essentially. So the first product is called Telepathy, and that enables someone who has lost the ability to command their body to be able to communicate
with a computer and move the mouse and actually operate
a computer with roughly the same dexterity, ultimately much more
(08:47):
dexterity than a human with working hands. Then our next
product is Blindsight, which will enable those who have total loss of vision, if they've lost their eyes or the optic nerve, or maybe have never seen, were even blind from birth, to be able to see again, initially low resolution,
(09:09):
but ultimately very high resolution, and then in multiple wavelengths,
so you could be like Geordi La Forge in Star Trek, and you can see in radar, you can see infrared, ultraviolet, superhuman capabilities, cybernetic enhancement essentially. And then along the way
this should help us understand a lot more about consciousness,
(09:32):
what does it mean to be a conscious creature? We'll understand vastly more about the nature of consciousness as a
result of this, and then ultimately I think this helps
mitigate the civilizational risk of artificial intelligence. We actually
(09:53):
already sort of have three layers of thinking. There's the limbic system, which is kind of your instincts,
your cortical system, which is your higher level planning and thinking.
And then the tertiary layer, which is the computers and
machines that you interact with, like your phone, all the
applications you use.
Speaker 2 (10:16):
So people actually are already cyborgs.
Speaker 1 (10:19):
You can maybe have an intuitive sense for this by
how much you miss your phone if you leave it behind.
Leaving your phone behind is like it's almost like missing
limbs syndrome.
Speaker 2 (10:32):
Your phone is somewhat of an extension of yourself, as
is your computer.
Speaker 4 (10:36):
So you.
Speaker 1 (10:38):
Already have this digital tertie layer, but the bandwidth between
your cortex and your digital tertie layer is limited by
speech and by and by half fast you can move
your fingers, and how fast you can consume information visually.
So so, but I think it's actually very important for
us to address that input output boundwidth constraint in order
(11:03):
for the collective will of humanity to match the will
of artificial intelligence. That's my intuition at least. So let's see,
and what this presentation is mostly about is attracting smart
(11:27):
humans to come and work with us on this problem.
So this is not a presentation to raise money or
anything like that.
Speaker 2 (11:36):
We're actually, you know, very well funded.
Speaker 1 (11:38):
We have a lot of great investors, some of the
smartest people in the world are invested in Neuralink.
But we we need smart humans to come here and
help solve this problem. So with that, let's uh, let's proceed.
Speaker 5 (12:04):
Hey everyone, my name is DJ. I'm the co-founder and president of Neuralink. And as Elon mentioned, well, actually we're standing in the middle of our robot space. We
have a stage set up, but you know, this is
actually where some of the next generation, most advanced surgical
robots are being built, so welcome to our space. It's
(12:33):
important to highlight that this technology is not being built
in the dark. This is not a secret lab where
we're not sharing any of the progress. In fact, we're
actually sharing you know, the progress very openly and as
well as also telling you exactly what we're going to
be doing, and we're hoping to progress on that as
as diligently and as safely and as carefully as possible.
(12:56):
So to start off, two years ago, when we did
our previous fundraising round, we outlined this path and timeline
to first human, and we currently have a clinical trial
in the US for a product that we call Telepathy,
which allows users to control phone or computer purely with
their thoughts. And you're going to see how we do
(13:17):
this and what impact this has had. And
not only have we launched this clinical trial, but as
of today we have not just one, but seven.
Speaker 6 (13:27):
Participants and we have an approval.
Speaker 5 (13:36):
And we also have an approval to launch this trial
in Canada, UK and the UAE. So I guess before
we dive into what this technology is and what we built, I wanted to quickly share a video with you
(13:57):
guys of when our first five participants met each
other for the first time.
Speaker 2 (14:02):
So here you go.
Speaker 6 (14:03):
All right, we have everyone together up.
Speaker 2 (14:06):
Guys, thanks everybody for joining. Want to hear from each of you.
Speaker 7 (14:10):
Yeah, I'm Nolan aka P one.
Speaker 8 (14:13):
My name is Alex. I am the second participant in
the Neuralink study.
Speaker 2 (14:18):
I am Brad Smith, ALS cyborg, P three.
Speaker 9 (14:22):
My name and my G four have.
Speaker 4 (14:29):
Less like.
Speaker 3 (14:31):
Uh yeah, I'm RJ, I'm P five, and I'm just, yes, kind of new to the team here. So yeah, appreciate it. No, trailblazer, you know, somebody's got to go first. Man, that was you.
Speaker 2 (14:44):
Appreciate that.
Speaker 10 (14:45):
What's been your favorite thing you've been able to do
with the neuralink so far.
Speaker 7 (14:49):
I've just had a good time being able to use
it as I travel, and I've drawn
Speaker 2 (14:53):
A little mustache on a cat.
Speaker 7 (14:55):
Had a lot of fun doing that. I mean, I've
just had a good time playing around with it.
Speaker 2 (14:59):
Oh you know what, But I.
Speaker 7 (15:00):
Do know what My favorite BCI feature is probably not
a feature, but I just I love web Grid more
than I love anything in my life. Probably I think
I could play that game NonStop forever.
Speaker 8 (15:14):
Has to be Fusion 360, being able to design parts,
design the hat logo with the BCI.
Speaker 2 (15:22):
That's what's up?
Speaker 11 (15:23):
Pretty sweet?
Speaker 2 (15:24):
That's sweet?
Speaker 8 (15:25):
Yeah, yeah, I have a little, uh, Arduino that takes input from my quad stick, converts it into a PPM signal to go to an RC truck.
Speaker 9 (15:39):
Cool little rock crawler.
Speaker 8 (15:42):
Well, with the BCI, I wrote code to drive the plane with the quad stick.
Speaker 6 (15:51):
That's awesome.
Speaker 1 (15:52):
The best thing I like about Neuralink is being able to
continue to provide.
Speaker 7 (16:02):
For my family and continue working.
Speaker 3 (16:07):
I think my favorite thing's probably been able to turn on my TV. Yeah, like, for the first time in two and a half years I was able to do that.
That's pretty sweet with a right shooting the all base.
That sounds nice. Excited to see what BCIs got going on.
Speaker 11 (16:22):
They got a car?
Speaker 8 (16:23):
What's your shirts say?
Speaker 11 (16:25):
Is it?
Speaker 3 (16:25):
I do a thing called whatever I want.
Speaker 6 (16:39):
Now.
Speaker 5 (16:39):
One of the major figures of merit that we have is to keep track of monthly hours of independent BCI use. Effectively, are they using the BCI, not at the clinic
but at their home? And what we have noticed, and
this is a plot of all of the different participants,
first five participants and their usage per month over the
(17:00):
course of the last year and a half and we're
averaging around fifty hours a week of usage and in
some cases peak usage of more than one hundred hours
a week, which is pretty much every waking moment. So
I think it's been incredible to see all of our
(17:20):
participants demonstrating greater independence through their use of BCI. Not
only that, we've also accelerated our implantation cadence as we've
amassed evidence of both clinical safety as well as value
to our participants. So to date, we have four spinal
cord injury participants as well as three ALS participants, with
(17:42):
the last two surgeries happening within one week of each other,
and we're just beginning.
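As a quick sanity check on the usage figures quoted a moment ago (around fifty hours a week on average, with peaks above one hundred hours a week), the small calculation below converts them to hours per day. The weekly numbers are taken from the talk; everything else is just arithmetic.

```python
# Convert the independent-use figures quoted above into hours per day.
HOURS_PER_WEEK_AVG = 50
HOURS_PER_WEEK_PEAK = 100

print(f"average: {HOURS_PER_WEEK_AVG / 7:.1f} hours/day")   # ~7.1 hours/day
print(f"peak:    {HOURS_PER_WEEK_PEAK / 7:.1f} hours/day")  # ~14.3 hours/day, most waking hours
```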
Speaker 6 (17:50):
This is just the tip of the iceberg.
Speaker 5 (17:52):
Our end goal is to really build a whole-brain interface. And what do we mean by a whole-brain interface? We
mean being able to listen to neurons everywhere, be able
to write information to neurons anywhere, be able to have
that fast data wireless transfer, to enable that high bandwidth
connection from our biological brain to the external machines, and
(18:15):
be able to do all of this with fully automated surgery,
as well as enable twenty four hours of usage, and
towards that goal, we're really working on three major product types.
Elon mentioned earlier that our goal is to build a generalized input-output platform and technology for the brain. So for
(18:37):
the output portion of it, which is extremely slow through our meat sticks, as Elon calls them.
Speaker 6 (18:46):
Meat hands that are holding the mics.
Speaker 5 (18:50):
We're starting out with helping people with movement disorders, where they lost the mind-body connection, either through a spinal cord injury, ALS, or a stroke, to be able
to regain some of that digital as well as physical
independence through a product that we're building called Telepathy, and
this is our opportunity to build a high-channel read and output device. On the input side of things, there's
(19:15):
opportunities for us to help people that have lost the
ability to see be able to regain that sight through a product that we're calling Blindsight, and this is our opportunity to build high-channel write capabilities. And last but not least, be able to also help people that are suffering from debilitating neurological dysregulation or psychiatric conditions
(19:39):
or neuropathic pain by inserting our electrodes and reaching any brain regions, to be able to insert them not just on the cortical layer, but into the sulci as well as deeper parts of the brain, the so-called limbic system, to really enable better opportunities to just regain some of that independence. Our North Star metrics are, one,
(20:03):
increasing the number of neurons that we can interface with,
and, second, to expand to many diverse areas, any parts
of the brain, starting with microfabrication or lithography to change
the way in which we can actually increase the number
of neurons that we can see from a single channel,
and also doing mixed signal chip design to actually increase
(20:26):
the physical channel counts to reach more neurons that we can interface with, to sort of allow more information from
the brain to the outside world.
Speaker 2 (20:37):
And then you.
Speaker 5 (20:38):
Know, everything we built from day one of the company
has always been read and write capable, and with Telepathy
our first product, the focus has been on the read
capabilities or the output, and we want to hone in on our write capability, and also show that through accessing
deeper regions within the visual cortex that we can actually
(21:00):
achieve functional vision. So now just to step you through
what the product evolution is going to look like in
the next three years, today, what we have is one
(21:20):
thousand electrodes in the motor cortex, the small part of the brain that you see in this animation called the hand knob area, that allows participants to control computer cursors as well as gaming consoles.
Speaker 6 (21:33):
Next quarter, we're planning to implant in.
Speaker 5 (21:36):
The speech cortex to directly decode attempted words from brain
signals to speech. And in twenty twenty six, not only
are we going to triple the number of electrodes from
one thousand to three thousand for more capabilities, we're planning
to have our first blind site participant to enable navigation.
(22:08):
And in twenty twenty seven we're going to continue increasing
channel counts, probably another triple, so ten thousand channels, and
also enable for the first time multiple implants, so not
just one in motor cortex, speech cortex, or visual cortex,
but all of the above. And finally, in twenty twenty eight,
(22:31):
our goal is to get to more than twenty five thousand
channels per implant, have multiple of these, have ability to
access any part of the brain for psychiatric conditions, pain dysregulation,
and also start to demonstrate.
Speaker 6 (22:45):
What it would be like to actually integrate with AI,
and all of.
Speaker 5 (22:57):
This is to say that we're really building towards set
of fundamental foundational technology that would allow us to have
hundreds of thousands, if not millions, of channels with multiple
implants for whole bit interfaces that could actually solve not
just these devoltating neurological conditions, but be able to go
beyond the limits of our biology.
Speaker 6 (23:15):
And this vertical integration.
Speaker 5 (23:17):
And the talented team that we have at Neuralink has
been and will continue to be the key recipe for
rapid progress that we will be making. Just to recap
real quick, Neuralink is implanted with a precision surgical robot. It's practically invisible, and one week later users are able to see their thoughts transform into actions. And to share more
(23:37):
about what that experience is like, I'd like to welcome Sahedge to the stage.
Speaker 12 (23:56):
What's up, guys. My name is Sahedge. I'm from the
Brain Computer Interface team here at Neuralink, and I'm going
to be talking about two things today. The first thing
is what exactly is a neuralink device capable of doing
right now? And the second one is how does that
actually impact the day to day lives of our users?
Very simply put, what the neuralink device does right now
(24:19):
is it allows you to control devices simply just by thinking. Now,
to put that a bit more concretely, I'm about to
play a video of our first user. His name is Nolan, if you remember from DJ's section. And what Nolan is
doing is he's looking at a normal off the shelf
MacBook Pro, and with his Neuralink device, you're going to
(24:39):
see he's going to be able to control the cursor
simply with his mind, no eye tracking, no other sensors.
And what's special about this particular moment is this is
the first time someone is using a neuralink device to
fully control their cursor. This is not your ordinary brain
controlled cursor. This is actually a record breaking control, literally
(25:04):
on day one, beating decades of brain computer research. And
I'm about to show you the clip on day one,
Nolan breaking the BCI world record.
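The "BPS" record being broken here is bits per second in a grid-selection task like Webgrid. The exact scoring rule isn't spelled out in the talk, but a commonly used formula in the BCI literature for this kind of task is the achieved bitrate sketched below; the grid size and trial counts in the example are hypothetical.

```python
import math

def achieved_bitrate(num_targets: int, correct: int, incorrect: int, seconds: float) -> float:
    """Achieved bitrate for a grid-selection task.

    Each selection among N targets carries log2(N - 1) bits; incorrect
    selections are penalized by subtracting them from the correct count.
    """
    bits_per_selection = math.log2(num_targets - 1)
    net_correct = max(correct - incorrect, 0)
    return bits_per_selection * net_correct / seconds

# Hypothetical numbers: a 7x7 grid, 60 correct and 2 wrong selections in one minute.
print(f"{achieved_bitrate(num_targets=49, correct=60, incorrect=2, seconds=60.0):.2f} BPS")
```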
Speaker 9 (25:20):
Ship good.
Speaker 12 (25:26):
Man, he's a new world record holder.
Speaker 2 (25:38):
There's one of surprise.
Speaker 9 (25:40):
I thought it was higher.
Speaker 7 (25:41):
I thought I would have to get to five or something.
Speaker 9 (25:44):
Oh my gosh, that's crazy.
Speaker 11 (25:48):
It's pretty cool.
Speaker 2 (25:52):
Uh yeah.
Speaker 12 (25:55):
Another really fun thing you can do with the Neuralink device, outside of controlling a computer cursor, is you can actually plug it in through USB to a lot of different devices.
And here we actually have Nolan playing Mario Kart. Now,
what's special about this particular clip is Nolan is not
the only cyborg playing Mario Kart.
Speaker 2 (26:13):
In this clip.
Speaker 12 (26:14):
We actually have a whole community of users as mentioned earlier,
and this is literally five of our first users of
Neuralink playing Mario Kart together over a call. Yeah, now, yeah,
Mario Kart.
Speaker 2 (26:34):
Is it's cool?
Speaker 12 (26:35):
You know you're using one joystick and then you're clicking
like a couple of buttons to throw items. What would
be even cooler is what if you could control two
joysticks at once, simultaneously, with your mind. Which I'm about to show you. And I think this is for the
first time someone playing a first person shooter game with
a brain computer interface. This is Alex and r J
(26:55):
playing Call of Duty, controlling one joystick to move and
then the other stick to, I think, point your gun, and then shooting people is a button. Uh, here's Alex shooting another person.
Speaker 3 (27:09):
Oh dear God, I don't know what to do, and
I'm wanting to freaking shoot you. When I do, I
notice shot in the face.
Speaker 12 (27:22):
Now that we have a bit of a sense of
what the BCI can do, a very important question
to answer is how does this impact the day to
day lives of the people that use it every day.
So I'm about to show you a clip going back
to Nolan for a second, where he talks. We simply
just asked him randomly during the day how he enjoys
using the BCI, a couple of months ago, and
(27:45):
this is his candid reaction.
Speaker 13 (27:46):
I work basically all day from when I wake up
trying to wake up at like six or seven am,
and I'll do work until session. I'll do session, and
then I'll work until you know, eleven, twelve pm or
twelve am. While I'm doing, like I'm learning my languages,
(28:14):
I'm learning my math, I'm like relearning all of my math.
I am writing, I am doing the class that I
sign up for, And I just I wanted to point
out that, like, this is not something I would be
able to do, like, without the Neuralink.
Speaker 12 (28:35):
Next, I want to talk a bit about Brad. You
guys may already know him as the ALS cyborg, and
Brad also has ALS and what separates him from our other
users is he's actually nonverbal, so he can't speak. Why
this is pretty relevant is he relies, at least before the Neuralink, on an eye-gaze machine to communicate, and a lot of eye-gaze machines you can't use outdoors. You
(28:57):
really need like a dark room. So what this means
is for the last six years since Brad's been diagnosed
with ALS, he's really been unable to leave his house. Now
with the neuralink device, We're gonna show you a clip
of him with his kids at the park, shot by
Ashley Vance and the team.
Speaker 3 (29:12):
Okay here ready, you absolutely do.
Speaker 14 (29:17):
I am absolutely doing more with Neuralink than I was
doing with eye gaze. I have been a Batman for
a long time, but I go outside now. Going outside
has been a huge blessing for me, and I can
control the computer with Telepathy.
Speaker 9 (29:34):
Dad's watching.
Speaker 15 (29:36):
Look, he's watching on the camera.
Speaker 14 (29:39):
One of the arms.
Speaker 12 (29:40):
The last user I want to talk about is Alex.
You've seen some clips of him earlier. What's special about Alex to me is he's a fellow left-handed guy who writes in cursive all the time. And what he mentioned is, since his spinal cord injury from like three, four years ago, he's been unable to just, like, draw
or write, and he always brags about how good his
(30:01):
handwriting was. So we actually got to put it to the test. We gave him a robotic arm. And I think this is the first time he tried using the robotic arm to write anything. And this is a sped-up version of him writing, in the Convoy trial, and drawing something.
Speaker 2 (30:16):
Sounds like that.
Speaker 12 (30:22):
Now, Yeah, controlling a robotic arm is cool. Uh, but
this one has a clamp. And what would be cooler
is if you could decode the actual fingers, the actual wrist,
all the muscles of the hand in real time.
Speaker 2 (30:35):
Just in the past couple of weeks.
Speaker 12 (30:37):
Uh, we were able to do that with Alex, and
you're about to see him and his uncle, uh playing
a game.
Speaker 16 (30:43):
Rock, paper, scissors, shoot. Damn it. Rock, paper, scissors, shoot. Rock, paper, scissors, shoot. Rock, paper, scissors, shoot.
Speaker 12 (31:07):
Some more cool controlling. Yeah, that's pretty dope.
Speaker 11 (31:21):
I don't know.
Speaker 12 (31:26):
And uh, controlling a robotic hand on screen is obviously
not super helpful for most people. Fortunately, we have connections
with Tesla, who have the Optimus hand, and we're actually actively working on giving Alex an Optimus hand so that he could actually control it in his real life. And
(31:48):
here's an actual replay of the end of that video using Alex's neural signals on an Optimus hand.
Speaker 2 (31:53):
Sean, if you want to play that.
Speaker 11 (32:02):
Yeah, actually let me.
Speaker 1 (32:07):
Let me maybe add a few things to that, which
is, so as we advance the Neuralink devices, you'll eventually actually have full body control and sensors from an Optimus robot. So you could basically inhabit an Optimus robot. It's not just the hand, the whole thing.
(32:30):
So you could, like, basically mentally remote into an Optimus robot and.
Speaker 2 (32:37):
And, uh, be kind of cool. The future is going to be weird, but pretty cool.
Speaker 1 (32:47):
And then another thing that can be done also is, like, for people that have, say, lost a limb, lost an arm or leg or something like that, then we think in the future we'll be able to attach an Optimus arm or legs. And so, you kind of, like, I
remember that scene from Star Wars where Luke Skywalker gets
(33:10):
his hand, you know, chopped over with the lightslaber and
he gets kind of a robot hand, and I think
that's the kind of thing that we'll be able to
do in the future working with Neuralink and Tesla,
so that it goes far beyond just operating a robot hand,
but replacing limbs and having kind of a whole body
robot experience.
Speaker 2 (33:30):
And then I think.
Speaker 1 (33:30):
Another thing that will be possible, I think is very
likely in the future, is to be able to bridge
where the damaged neurons are, so you can take the
signal from the brain and transmit that signal past where
the neurons are damaged or severed to the rest of
the body, so you could reanimate the body, so that
(33:52):
if you have a neurallink implant in the brain and
then one in the spinal cord, then you can actually
bridge the signals and you could walk again and have
full body functionality.
Speaker 2 (34:04):
Obviously that's what people would prefer.
Speaker 1 (34:06):
To be clear, we realize that that would be the
preferred outcome, so that even if you have a broken neck or a severed spinal cord, we believe, and I'm actually at this point I'd say fairly confident, that at some point in the future we'll be able to restore full
body functionality.
Speaker 4 (34:31):
Yeah, so hello, hello everyone. My name is Nir and I am leading the BCI applications group. And the videos that I just shared with you, I've probably watched them maybe thousands of times, but I still get goosebumps every time I watch them. And I think this is one of the cool perks here at Neuralink when you get the job, is that you might get goosebumps every week, or maybe every few
(34:53):
days in good weeks. And this is really fun as an engineer. It's really cool because you can build
a new feature. You can build a new machine learning
model and new software feature and test it on the
same day with a participant and get feedback. And you
already saw with our first device, Telepathy, that we can
(35:16):
address the very diverse needs of the different users that we have, from moving a cursor, to playing games, to moving a robotic hand with multiple fingers, and we could not have done it without the Neuralink device. The Neuralink device gives us something that no other device can
give us, which is a single neuron recording from thousands
(35:37):
of channels simultaneously. The Telepathy product is basically recording the neural activity from a small area in the motor cortex that's involved in execution of hand and arm movements. But if we go only about two or three inches below, there's another brain area that's involved in execution of speech, and with the same device, with the same machine learning model architecture,
(35:59):
the same software pipeline, the same surgical robot.
We can have a new application and we can do
it very quickly. It's really interesting that if we can decode someone's intention to speak, silent and non-vocal communication, we can use that to revolutionize the way we interact with computers, with technology, and with information. Instead of typing
(36:20):
with your finger, or like moving the mouse, or talking to your phone, you'll be able to interact with the computer at the speed of thought. It will make this interaction
much more, much faster, and much more intuitive. The computers
will understand what you want to do. And we can
also expand that to AI. We can now build an
interface with AI where you will be able to access information,
(36:44):
will be able to share our thoughts anywhere, anytime, privately and silently. Again, because we build a fundamental technology, a platform, and we do everything in house, we own the entire stack from neurons to pixels for the users. Now
I'll pass it to Rouse to talk about UI for BCI.
Speaker 17 (37:04):
Thank you, Nir. Each spike that our implant detects
goes on a fairly remarkable journey to ultimately form a
pixel on a participant's display, and that experience starts with,
of course, unboxing, the very first time that a participant
(37:28):
pairs to and meets their implant, this invisible part of
their body and sees their own spikes materialize across the display.
From there, they'll go into body mapping and actually imagine
moving their arm again and get a feel for what
feels natural to them and what doesn't, and they'll take
(37:49):
that into calibration, using one of those motions to actually
move a cursor again, iteratively refine their control as they
go throughout this process, until finally they're teleported back to
their desktop and can experience the magic of neural control
(38:12):
for the very first time. And our control interfaces are where the OS integration that we do really shines, letting
us adapt both control and feedback for every interaction. So
for familiar interactions like scrolling, we can surface an indicator
over the scrollable parts of the display and add a touch of gravity to automatically pop a participant's cursor onto that
of gravity to automatically pop a participant's cursor onto that
indicator as they approach.
Speaker 9 (38:41):
We show the actual velocities.
Speaker 17 (38:42):
That we decode inside of it and add a bit
of momentum to those velocities to carry them forward as
they glide across the page. There are also unique interactions
that we need to solve for in this space. For example,
when a participant is watching a movie or just talking
to somebody next to them, the brain is very active
still and that activity can actually induce motion in the cursor,
(39:05):
distracting them from that moment. So when a participant wants
to just get their cursor out of the way, they
can push it into the edge of the display to
park it there, and of course we add gravity to
sort of hold it still, but they can push it
out with either just a firm push or in this case,
a gesture. And of course it goes without
(39:27):
saying that all of these control interfaces are designed hand
in hand with our participants, so huge shout out to
both Nolan and Brad for helping us design these two
and those control interfaces, of course, extend to typing. We have
a great software keyboard that does everything you'd expect it to,
popping up when a participant clicks on a text field,
giving them feedback about the click along the surface of
(39:48):
the key and supporting both dictation and swipe.
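As a rough illustration of the cursor dynamics described above, decoded velocities blended with momentum plus a "gravity" pull toward a nearby snap target such as a scroll indicator, here is a minimal sketch. The smoothing constants, the gravity rule, and the function names are assumptions for illustration, not Neuralink's actual implementation.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class CursorState:
    x: float
    y: float
    vx: float = 0.0
    vy: float = 0.0

def step(cursor: CursorState, decoded_vx: float, decoded_vy: float,
         target: tuple[float, float] | None, dt: float = 0.01,
         momentum: float = 0.85, gravity_gain: float = 4.0,
         gravity_radius: float = 60.0) -> CursorState:
    """One update of an on-screen cursor driven by decoded velocities.

    Momentum blends the newly decoded velocity with the previous one so motion
    glides; gravity adds a pull toward a nearby snap target (e.g. a scroll
    indicator) once the cursor is within gravity_radius pixels of it.
    """
    cursor.vx = momentum * cursor.vx + (1 - momentum) * decoded_vx
    cursor.vy = momentum * cursor.vy + (1 - momentum) * decoded_vy

    if target is not None:
        dx, dy = target[0] - cursor.x, target[1] - cursor.y
        if (dx * dx + dy * dy) ** 0.5 < gravity_radius:
            cursor.vx += gravity_gain * dx
            cursor.vy += gravity_gain * dy

    cursor.x += cursor.vx * dt
    cursor.y += cursor.vy * dt
    return cursor
```

The same structure also covers "parking" the cursor at a display edge: the edge simply becomes another target with gravity holding the cursor still until a firm push or gesture exceeds the pull.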
Speaker 18 (39:58):
Hi everyone, I'm Harrison, an engineer here at Neuralink, and I must say being an ML engineer at Neuralink is
a bit like being a kid in a candy store.
When you think of the inputs to most ML systems out there, you might think of pixels, of tokens, or
of a user's Netflix watch history. The input to our
systems is a little different. It is pure raw brain power.
(40:22):
And when we think about the ML systems we can build here at Neuralink, really we're limited by our imagination and our creativity. There's no reason our ML systems can't
do anything that the human brain can do, such as
controlling a phone, typing, or even gaming.
Speaker 6 (40:38):
Right here to my.
Speaker 18 (40:39):
Left is actual footage of Alex, one of our participants,
playing a first person shooter against RJ, another one of
our participants. Now, for those unfamiliar with first person shooters,
this is not a trivial feat. It requires two fully
independent joysticks or four continuous degrees of control, as well
as multiple reliable buttons. Now, contrary to popular belief, the
(41:02):
Neuralink does not simply read people's minds; it's simply reading
neuronal activations corresponding to motor intent.
Speaker 9 (41:10):
So one of the fun.
Speaker 18 (41:11):
Challenges with this project was figuring out which motions we're
going to give out to the joystick. We started with
the typical left thumb and right thumb, but quickly found
the dominant hand overshadow the non dominant hand. My personal
favorite is we had one of our participants imagine walking
for the left joystick and aiming for the right joystick,
so in game, they were simply doing naturalistic motions like
(41:33):
you might do in virtual reality in Ready Player one,
and that was really cool to watch.
Speaker 9 (41:37):
What we ended up on was the thumb for.
Speaker 18 (41:39):
The left joystick and the wrist for the right joystick,
and I challenge the audience to try to replicate their motions. I'm really in awe of them being able to pull this off. I want to talk a bit about the progress on our cursor calibration experience.
Speaker 2 (41:53):
To my left.
Speaker 18 (41:53):
Here you can see RJ completing his first ever cursor calibration with a redesigned onboarding flow, from where we first gather information about his intent and how to map the neural activity, to the first time he controls a cursor, to the final product where he has smooth and fluid control of his computer. And most remarkably, this experience took
only fifteen minutes from start to finish, fifteen minutes
(42:15):
from no control to fluid computer use.
Contrast that to a year and a half ago with
P one, where that was multiple hours to get to
the same level of control and several engineers standing around
a table pulling their hair out. There was virtually no
(42:36):
need for Neuralink engineers to even be at this session. This was basically an out-of-the-box experience for
our participant, and even more remarkably, we're continuing to smash
day one records, with RJ being able to achieve seven BPS.
Speaker 9 (42:52):
On his very first day with a neuralink.
Speaker 18 (42:58):
Now, such an effective and efficient calibration process is only made possible by high fidelity estimations of a user's intention, or labels. And to briefly illustrate just how challenging a
problem that is, this is an animation of myself trying
to draw circles on my desktop with a mouse. Now,
the task was simple, draw uniform circles at a constant speed,
(43:20):
repeatedly and as you can see by that animation, I
am horrible at that.
Speaker 2 (43:25):
Even though my.
Speaker 18 (43:25):
Intent was pretty obvious and unambiguous, the execution was really poor.
There is a ton of variation in both speed and
the shape itself. To visualize this a little differently, each
row here is one of those circles unwound in time
with synchronized starts, and you can just see how much
variation there is in the timing of each circle as
well as what I'm doing at any given point in time.
(43:49):
Orthogonal to the labeling problem is neural nonstationarity, or the
tendency of neural signals to drift over time. And I
think that's honestly a beautiful thing, right? If your neural signals didn't drift, you couldn't grow. When you wake up
the next day, you're not the same person you were
the day before. You've learned, you've grown, You've changed, and
so too must your neural data change. This animation here
(44:11):
is a simple illustration of the learned representation by the
decoder and how it drifts the further away we get
from the day it was trained on. This is one
of the key challenges we need to solve here At
Neuralink to unlock a fluid and product-level experience for
our users.
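One widely used way to get the intention "labels" discussed here is to assume that, during calibration, the user always intends to move straight toward the current target, and then refit the decoder to those inferred velocities. The sketch below illustrates that general idea with a simple least-squares linear decoder; it is not Neuralink's actual pipeline, and the data shapes, speed assumption, and placeholder data are made up for illustration.

```python
import numpy as np

def intention_labels(cursor_xy: np.ndarray, target_xy: np.ndarray, speed: float = 1.0) -> np.ndarray:
    """Infer intended velocities: unit vectors from cursor toward target, scaled by an assumed speed."""
    direction = target_xy - cursor_xy
    norms = np.linalg.norm(direction, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return speed * direction / norms

def refit_linear_decoder(neural: np.ndarray, cursor_xy: np.ndarray, target_xy: np.ndarray) -> np.ndarray:
    """Least-squares refit of a linear map from neural features to intended 2D velocity.

    neural:  (T, channels) binned features, e.g. spike counts per time step
    returns: (channels, 2) decoding weights
    """
    labels = intention_labels(cursor_xy, target_xy)        # (T, 2) inferred intent
    weights, *_ = np.linalg.lstsq(neural, labels, rcond=None)
    return weights

# Hypothetical usage with random placeholder data:
rng = np.random.default_rng(0)
neural = rng.normal(size=(500, 1024))
cursor = rng.uniform(0, 1000, size=(500, 2))
target = rng.uniform(0, 1000, size=(500, 2))
print(refit_linear_decoder(neural, cursor, target).shape)  # (1024, 2)
```

Refitting against fresh labels like this is also one generic way to cope with the nonstationarity described above: as the neural signals drift from day to day, the decoder can be periodically re-estimated on recent data.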
Speaker 15 (44:32):
Hi everyone. My name is Joey. Blindsight is
our project to build a visual prosthesis to help the
blind see again. Users would wear a pair of glasses
with an embedded camera and receive an implant in their
(44:54):
visual cortex. Scenes from the environment are recorded by the
camera and processed into patterns of stimulation delivered to the brain, causing visual perception and restoring functionality. Now Blindsight will be enabled by placing our implant into the visual cortex.
This is a new brain area for us, and this
(45:16):
brings new opportunities and challenges. So the surface of the
brain for visual cortex represents just a few degrees of
angle in the center of the visual field. Larger fields
of view are represented deep within the cortical folds of
the calcarine fissure. Our threads are able to access these
(45:36):
deeper structures, providing the possibility of restoring vision over a functional,
useful visual field. So the N1 implant has had experimental stimulation capabilities for quite some time, but our new S2 chip is designed from the ground up for stimulation.
It provides over sixteen hundred channels of electrical stimulation, high
(45:58):
dynamic range recording abilities, and a wide range of microstimulation
currents and voltages. We can achieve these capabilities because we're
vertically integrated and we designed this custom ASIC in house. Similarly,
we design and fabricate our electrode threads in house, and
here you can see one of our standard threads designed
(46:20):
for recording in an electron micrograph. For Blindsight, our
requirements are a little different and our vertical integration allows
us to rapidly iterate on the design and manufacturing of
these threads for this new purpose. So here I'm using
red arrows to highlight the electrode contacts which are optimized
for stimulation, and as you can see, they're a little
(46:42):
bit larger, which results in a lower electrical impedance for
safe and effective charge delivery, which is important for Blindsight. Now, how can we calibrate our implant for Blindsight? So
here's one way. We stimulate on the array, picking say
three different channels. The user perceives something say three spots
(47:04):
of light somewhere in their visual field and points at them.
We track their arm and eye movements and repeat this
process for each of the channels on the array, and
here's what a simulated example of a blind site vision
could look like after calibration. Now I showed you how
(47:32):
for Blindsight we need to insert threads deeper into
the brain than we have previously and doing this requires
state of the art medical imaging. So we worked with
Siemens to get some of the best scanners on earth.
We built out our imaging core from scratch in the
past year. Actually it was faster than that. It was
about four months from dirt to done. Since bringing the
(47:54):
scanners online, we scanned over fifty internal participants, building out
a database of human structural and functional anatomy. What can
we do with the imaging information from these scanners? So
medical imaging can be used for surgical placement. It lets
us parcellate out brain regions by their function, and we use our imaging capabilities to refine the placement for Telepathy.
(48:15):
It also gives us the capability of targeting new brain regions for future products such as Blindsight or a speech prosthesis, and we're working towards more capabilities. So one
click automated planning of surgery from functional images to robot
insertion targets. Here you can see a screen capture from
one of our in-house tools to do end to
end surgical planning. You can see a region of motor
(48:35):
cortex known as hand knob and the thread trajectory plans
that'll be sent directly to the robot. This is a
really incredible degree of automation that's only possible because we're
controlling the system from one end to the other.
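The Blindsight calibration loop described earlier (stimulate a channel, have the user point to where they perceived a spot of light, track their arm and eye movements, and repeat across channels) amounts to building a lookup from stimulation channel to visual-field position. A minimal sketch of that bookkeeping, with made-up trial data and a simple per-channel average, might look like this; the data format and averaging are assumptions for illustration only.

```python
from collections import defaultdict

def build_phosphene_map(trials):
    """Map each stimulation channel to the average visual-field position the user pointed to.

    trials: iterable of (channel, azimuth_deg, elevation_deg) tuples from repeated pointing trials.
    """
    sums = defaultdict(lambda: [0.0, 0.0, 0])   # channel -> [sum_az, sum_el, count]
    for channel, az, el in trials:
        acc = sums[channel]
        acc[0] += az
        acc[1] += el
        acc[2] += 1
    return {ch: (acc[0] / acc[2], acc[1] / acc[2]) for ch, acc in sums.items()}

# Hypothetical trials for three channels, two repeats each:
trials = [(12, -1.8, 0.4), (12, -2.2, 0.6),
          (57, 3.1, -1.0), (57, 2.9, -1.2),
          (201, 0.2, 4.8), (201, 0.0, 5.2)]
print(build_phosphene_map(trials))
```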
Speaker 10 (48:54):
My name is John and I lead the robot mechanical team.
This is our current R one robot. It was used
to implant the first seven participants. This robot works really well,
but it has a few flaws, one of which is
the cycle time is rather slow, so to insert each
thread it takes in the best case scenario seventeen seconds,
and many cases external disturbances cause us to have to
(49:16):
retry to grasp that thread and then reinsert it. To scale the number of neurons we access, through higher channel count and increased numbers of threads, we need to have a much
faster cycle time. So let me introduce our next generation robot,
which is right here. Through rethinking the way that we
(49:43):
hold the implant in front of the robot, by holding it directly in front, on the robot head, we will achieve an eleven times cycle time improvement, so each thread
takes one and.
Speaker 2 (49:52):
A half seconds.
Speaker 10 (49:53):
We also made a lot of surgery workflow process improvements through deleting the separate operator station and implant stand. Now the outside of the robot looks pretty similar
between the two, but it's what's inside that really counts. Each system has been redesigned from the ground up with a focus on reliability, manufacturability, serviceability, and using a lot
(50:16):
of our vertical integration techniques. It's enabled us to have a lot more control of the system end to end. Now, that fast cycle time doesn't mean much if it's
not compatible with a significant portion of the human population.
Prior to each surgery, we scan our participants' anatomy and
ensure that they will be compatible with the robot and
vice versa. Unfortunately, the robot isn't compatible with everyone, so
(50:39):
we had to extend the reach of the needle in
the next generation robot, and now we're compatible with more
than ninety nine percent of the human population. We've also
increased the depth that the needle can insert threads. Now
we can reach more than fifty millimeters from the surface
of the brain, accessing and enabling new indications. We have
to produce a ton of custom sterile components for each surgery.
Speaker 6 (50:59):
We actually supply more than twenty of these parts.
Speaker 10 (51:01):
Many of these parts are made through traditional CNC manufacturing capabilities, which we do just on the other side of this wall actually, and some custom developed processes like this femtosecond laser milling used to manufacture the tip
of the needle. Now, these processes take quite a bit
of time, effort, and cost, So let's take a look
at how we're going to reduce costs and time for
(51:22):
one of the components. So the current needle cartridge has
a total cycle time of about twenty four hours and
the machine components cost about three hundred and fifty dollars.
The final assembly is performed by a set of like
highly skilled technicians. They have to glue a one hundred
and fifty micron diameter cannula onto this wire-EDM machined, stainless steel base plate. They have to electropolish a forty
(51:43):
micron wire into a sharp taper, and then they have
to thread that forty micron wire into a sixty micron
hole in the cannula. This is done manually, and then
they finally have to laser weld all the components together.
The next generation needle cartridge takes only thirty minutes of cycle time and fifteen dollars in components. We were able to
delete the wire-EDM machined base plate and the cannula
(52:06):
gluing step by switching to an insert-molded component. So we get a box of these base plates with the cannulas already installed, like a thousand of them, for like five or ten dollars apiece. We also
deleted the electropolishing step with the revised needle tip geometry,
which is also compatible with inserting the threads through the dura.
We have a few revised manufacturing techniques to delete the
(52:28):
manual threading, through basically a funnel. Rather simple, but it has had a big impact. And then we're able to
delete the laser welding through using crimping.
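For reference, the improvement factors implied by the numbers quoted in this section (twenty-four hours to thirty minutes of cartridge cycle time, three hundred fifty dollars to fifteen dollars in components, and seventeen seconds to one and a half seconds per thread insertion) work out as follows; the figures are copied from the talk and the arithmetic below is just a check.

```python
# Improvement factors from the figures quoted above (approximate, as stated in the talk).
old_cartridge_hours, new_cartridge_hours = 24.0, 0.5     # 24 h -> 30 min cycle time
old_cartridge_cost, new_cartridge_cost = 350.0, 15.0     # $350 -> $15 in components
old_insert_seconds, new_insert_seconds = 17.0, 1.5       # per-thread insertion time

print(f"cartridge cycle time: {old_cartridge_hours / new_cartridge_hours:.0f}x faster")  # ~48x
print(f"cartridge cost:       {old_cartridge_cost / new_cartridge_cost:.0f}x cheaper")   # ~23x
print(f"thread insertion:     {old_insert_seconds / new_insert_seconds:.1f}x faster")    # ~11.3x
```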
Speaker 2 (52:41):
By Hi.
Speaker 19 (52:47):
I'm Julian. I'm one of the leads on the implant team.
So the way humans communicate today, if they want to
output information is by using their hands and their voice
as I'm doing right now, And if you want to
receive information, you use your ears and your eyes, and
of course that's how you're receiving this very talk. But
we've built this implant, and this implant is very special
(53:11):
because it is the first time that we're able to
add a completely new mode of data transfer into and
out of the brain. If you look at this device
in a nutshell, it's really just sampling voltages in the
brain and sending them over radio. But if you zoom
out and look at the system from end to end,
what you actually see is that we're connecting your brain
(53:34):
or a biological neural net to a machine learning model
or a silicon neural net on the right hand side.
And I actually think this is really elegant because the
machine learning model on the right hand side is in
fact inspired by neurons on the left hand side, and
so in some sense, we're really extending the fundamental substrate
(53:55):
of the brain for the first time. We're able to
do this in a mass market product. That's a very
very special piece of hardware. So these are some of
the first implants that we ever built. There are electrodes
(54:15):
that were made with our in house lithography tools. We
have custom ASICs that we also designed in house,
and this was really a platform for us to develop
the technology that allows us to sense microvolt-level signals in the brain across thousands of channels simultaneously. We learned a lot from this. But as you'll notice in the right two images, there are USB-C connectors on these devices.
(54:38):
These were not really the most implantable implants. This next
set of images are the wireless implants, and there was
a complete evolution that we went through to add the battery,
the antenna, the radio, and to make it actually fully implantable.
Once it's implanted, it's completely invisible. It's very compact, it's modular,
(55:00):
and it's a general platform that you can use in
many places in the brain. Going from that top row
to the bottom row is very challenging. The implant you
see on the bottom right here is in fact the
device that we have working in seven participants today and
it's augmenting their brain every day and restoring their autonomy.
But getting to that point involved a huge number of
(55:23):
formidable engineering challenges. We first had to make a hermetic enclosure,
passing one thousand separate conductors through the enclosure of the device.
We had to figure out how to make charging seamless
and work with very tight thermal constraints in a very
very small area, and then we also had to scale
up our testing infrastructure so that we could support large
scale manufacturing and very safe devices and have confidence in
(55:45):
our iteration cycle.
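Since the implant is described as sampling voltages in the brain and sending them over radio, a common first step in turning those samples into the spike events mentioned throughout this presentation is threshold crossing on the filtered signal. The sketch below is a generic, single-channel illustration using a robust noise estimate; the sampling rate, threshold multiplier, and synthetic data are assumptions, not the on-device signal chain.

```python
import numpy as np

def detect_spikes(samples: np.ndarray, fs: float = 20_000.0, k: float = -4.5) -> np.ndarray:
    """Return spike times (seconds) as negative threshold crossings on one channel.

    The threshold is k times a robust noise estimate (median absolute deviation),
    a common convention for extracellular recordings.
    """
    noise_sigma = np.median(np.abs(samples)) / 0.6745        # robust estimate of noise std
    threshold = k * noise_sigma
    below = samples < threshold
    crossings = np.flatnonzero(below[1:] & ~below[:-1]) + 1  # first sample of each crossing
    return crossings / fs

# Hypothetical usage on synthetic data: Gaussian noise plus a few injected negative deflections.
rng = np.random.default_rng(1)
x = rng.normal(0, 10, size=20_000)     # 1 second at 20 kHz, ~10 uV noise
x[[2_000, 9_500, 15_000]] -= 120       # three fake spikes
print(detect_spikes(x))                # roughly [0.1, 0.475, 0.75] seconds
```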
Speaker 9 (55:47):
So what's next.
Speaker 19 (55:48):
We're going to be increasing our manufacturing so that we
don't just produce a certain like a small number of
implants per year, but thousands and then eventually millions of
implants per year. We're also going to be increasing
channel count. More channels means more neurons are sensed, which
means more capabilities. In some sense, we often think a
lot about the Moore's law of neurons that we're interacting
(56:11):
with and in the same way that Moore's law propelled
forward many subsequent revolutions in computing, we think that sensing
more and more neurons will also completely redefine how we
interact with computers and reality at large.
Speaker 9 (56:26):
I want to leave you with one final thought.
Speaker 19 (56:29):
When I was a child, I used a fifty six
kilobit modem to access the Internet. If you remember
what it's like, you would go to a website.
Speaker 2 (56:38):
You're lucky, You're lucky, bastard. Yeah, when I was a child,
we had acoustic couplers.
Speaker 1 (56:45):
Oh, yeah, okay, so they just beeped at each other.
Speaker 9 (56:48):
Yeah.
Speaker 19 (56:49):
The first modem was the acoustic coupler. Incredible device, honestly.
But then if you I guess, if you're my age,
you started with a fifty-six k modem and you would
go to a website and like there would be an
image and it would scroll like slowly, it
(57:10):
was loading pixel by pixel on the screen.
Speaker 9 (57:12):
So that's what it's like to be bandwidth limited.
Speaker 19 (57:15):
Now I imagine using the current Internet with that same modem.
It's like it's inconceivable, it would be impossible to do.
Speaker 9 (57:23):
So what broadband Internet did to.
Speaker 19 (57:26):
The fifty-six kilobit modem is what this hardware
is going to do to the brain. We're trying to
drastically expand the amount of bandwidth that you have access
to to have a much richer experience and superhuman capabilities.
Speaker 5 (57:46):
So I guess, just to kind of close out and
to recap today, Neuralink is working reliably and has already changed the lives of seven participants and is making a real impact.
And our next milestone is to go to market and
enable scaling of this technology to thousands of people, as
(58:07):
well as expand functionalities beyond just movement, to enable sophisticated robotic arm control, speech, vision, giving sight back, and even
Speaker 6 (58:18):
Getting to the speed of thought.
Speaker 5 (58:20):
I hope you got a good sort of sample of
our technology stack and the challenges that we have. And
I'd like to hand over the mic to Elon for
any closing remarks.
Speaker 1 (58:33):
Well, we're trying to give you a sense of the
depth of.
Speaker 2 (58:38):
Talent at Neuralink.
Speaker 1 (58:39):
There's a lot of really smart people working on a
lot of important problems. This is one of the most
difficult things to to actually succeed in creating and have
at work and work at scale and be reliable and
available for millions of people at an affordable price. So
super heart problem and we'd like to have you come
(59:03):
join and help.
Speaker 2 (59:04):
Us solve it. Thank you.