Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Sleepwalkers is a production of I Heart Radio and Unusual Productions.
Jan had become paralyzed through a disease process such that
she couldn't move anything below her neck, and she volunteered
to have the surgical procedure in which we implanted electrodes
(00:25):
in her head and we were able to decode the
signals from her brain and she was able to move
a high performance prosthetic arm and hand. That's Andy Schwartz,
a professor of neurobiology at the University of Pittsburgh. He
helped build the technology that allowed a patient called Jan
Sherman to control a robotic arm with her mind. In
(00:48):
order to do that, Andy's team needed to give the arm a way to know where Jan wanted it to move.
They had to read signals directly from her brain and
teach a computer to understand them. So they built a
brain-computer interface, a BCI. We implanted electrodes
in her head. You have to put on these bulky
connectors with thick cables going to a bank of amplifiers
(01:08):
and computers. It sounds like something out of a science
fiction film. But once Andy could see Jan's neurons firing,
reading her intentions was simpler than you might think. What
we found is that the rate that these neurons fire
is related to the direction that the arm moves. When
you add their signals together, you can get a very
(01:29):
precise representation of that movement. It's a very very simple algorithm,
and it's like listening to a Geiger counter, and each click of the Geiger counter is the same, but as
you get closer to a radioactive source, those clicks come
closer together. Listening for those clicks was the breakthrough necessary
(01:50):
to translate Jan's thoughts into signals that the robotic arm
could understand, and this meant that Jan could move again.
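To make Andy's Geiger counter analogy concrete, here is a minimal sketch of the population-vector idea he describes: each neuron clicks fastest for its own preferred direction, and summing the preferred directions, weighted by how fast each neuron is firing, recovers the intended movement. The neuron count, cosine tuning model, and noise level below are invented for illustration; this is not code from Andy's lab.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each simulated neuron has a "preferred direction": the movement
# direction that makes its Geiger-counter clicks come fastest.
n_neurons = 100
angles = rng.uniform(0, 2 * np.pi, n_neurons)
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (N, 2)

BASELINE, GAIN = 10.0, 8.0  # invented firing-rate parameters

def firing_rates(intended):
    """Cosine tuning: a neuron fires above baseline in proportion to
    how well the intended direction matches its preferred direction."""
    return BASELINE + GAIN * preferred @ intended

def decode(rates):
    """Population vector: sum the preferred directions, weighted by
    how far each neuron's rate is above baseline."""
    vec = (rates - BASELINE) @ preferred
    return vec / np.linalg.norm(vec)

intended = np.array([np.cos(0.7), np.sin(0.7)])              # true direction
noisy_rates = firing_rates(intended) + rng.normal(0, 1, n_neurons)
print("intended:", intended)
print("decoded: ", decode(noisy_rates))  # close to the intended direction
```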
She could reach out and touch her husband, and Andy's
team filmed Jan using the arm to complete a more
playful goal that she'd set: to feed herself chocolate for the first time in ten years. One small nibble for a woman, one
(02:16):
giant bite for BCI. Jan was able to do that
with the robot, and not only that, but she made
graceful and beautiful movements. That's what really blew me away.
It looks much like a real arm and hand, and
for someone who studies movement, it was really quite beautiful.
(02:40):
Andy is a scientist and a researcher not given to
being sentimental, but the way he talks about Jan moving
her robotic arm, it's like he's describing a dance, a ballet.
It's one of the most inspiring examples of humans and
machines working together in concert that we've come across in
all of our reporting for Sleepwalkers. But it also raises
(03:00):
profound questions about the future of our health, our bodies,
and our society. What are the implications positive and negative?
As AI makes us ever more able to decode complex
systems like the brain or the human genome, how is
AI poised to change the world of medicine? I'm Oz Veloshin, and this is Sleepwalkers. So, Kara, I found Andy's
(03:36):
story completely mind-blowing, no pun intended. You know, it speaks to one of the last remaining mysteries. For the most part,
neuroscience is still very much a black box problem. We
know what's happening in the brain, but neuroscientists can't always
know why and how, which is a lot like the
problem of black box AI. We are moving quickly into
(03:57):
a world where sensors can read us better and better. You know, they can read pupil dilation, carbon dioxide in the breath, to understand people's emotions. Yeah, this is the stuff we talked about in episode four around using AI to better read biometrics with Poppy Crum. The difference in this case is that Andy isn't monitoring the outside of
our bodies, so the privacy concern is less. To read
(04:18):
Jan's brain, he had to drill into her skull and
place electrodes onto the surface of her brain and then
connect them to a computer, so that's unlikely to creep
up on you. Google is not going to be doing
that to get geolocation, not yet. That said, Andy told me one of his big priorities is trying
to figure out how to achieve the same effects without
(04:40):
needing invasive surgery, so that people can use this technology
at home. Talking about reading the brain, one cutting edge
application for AI is to restore language. A few months ago,
I was actually reading this paper in Nature, and I'm
not sure all podcast listeners read Nature. Well, thanks, Kara,
you do the hard work so that we don't have to.
I guess that's why they pay me the big bitcoin.
(05:01):
But you know, basically this article was about decoding the
human brain to produce language based on how the brain
tells the mouth to move. Well, funny enough, this is
something I spoke to Andy about a few months before
your paper in Nature was published, Kara, and he was
talking about exactly this. So if you record from motor
(05:21):
areas associated with producing language, you can start to recognize
certain words and phrases even. I think that's realizable in the near term. What Andy is saying is that to
create spoken words directly from the brain, we don't actually
need to read thoughts. We can just look at the
last step, the moment when thought becomes action as neurons
(05:42):
fire to move our tongue, our lips, our jaw to create sound. And just like with Jan, an algorithm can
listen to those neurons and allow thoughts to become actions.
And this could transform lives. So if you could start
to recognize words and language from brain activity, that would
(06:02):
be very helpful for people who are locked in with
ALS and can't communicate. We talked in the
last episode about how many technological breakthroughs have come out
of DARPA. That's the branch of the Defense Department charged
with inventing technological surprises. Well, neural prosthetics, or robotic limbs controlled by the mind, have been an area of heavy
(06:23):
investment for the agency. You may remember Gil Pratt from
earlier in the series. He's now the CEO of the
Toyota Research Institute, but previously he was at DARPA where
he worked with none other than Andy Schwartz. So he
was involved in a project at DARPA which was called
Revolutionizing Prosthetics. And the project that I started was to
(06:45):
see if we could actually help some of the experimental
patients that he had to perform even better. So Kara
I was interviewing Gil Pratt about his work at Toyota
and on self driving cars, and we got talking about
his interest in these human-machine partnerships, and I said,
let me tell you a story about a scientist called Andy Schwartz and his patient Jan, and Gil was like, yeah,
(07:08):
I worked on that. Yeah, literally, it's funny. And it also shows the long arm of DARPA, pun intended, again in areas far away from military applications. You know, DARPA's interest in prosthetics is actually because of veterans, many
of whom lose limbs on the battlefield, which highlights again
(07:29):
the dual-use nature of so much innovation. Right. And the work Gil was doing at DARPA with Jan was all about doing a better job of interpreting her brain waves using the existing models. My group made a
system called Arm Assist that watched what Jan was trying
to do. It inferred Okay, now she's trying to pick
(07:49):
up a block. Now she's trying to move it over
to the left. Now she's trying to drop the block.
You know, in a way very similar to if you
use PowerPoint and you have like snap-to-grid or snap-to-object turned on, it will help you
move the mouse to where it thinks you want to go.
This system helped her move the arm to where it
inferred she wanted to go based on a very noisy
(08:11):
signal that was coming out of her brain. That noise
in the signal was partly because the connection between Jan's
brain and the decoding computer was weak. So Gil's team
at DARPA designed a program to boost Jan's intentions. We tested
her by randomly turning the assist on and off. And
you can think of the assist as a guardian, like
(08:32):
what we're developing for the car to stop you from
having a crash, and we had the guardian in this
case help as little as possible, but still be effective
at helping her to reach the goal. The amount of
help that we gave her was so low that she
couldn't tell whether it was on or off. But when
Guardian was turned off, Jan's success rate fell.
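As a rough illustration of the minimal assist Gil describes, here is a toy shared-control blend: a noisy brain-decoded velocity is mixed with a small pull toward the goal the system has inferred, and the blending weight plays the role of the guardian. This is a sketch of the general idea, not DARPA's actual Arm Assist code; the assist_level parameter and the noise model are assumptions.

```python
import numpy as np

def shared_control(decoded_vel, hand_pos, goal, assist_level):
    """Blend the noisy decoded command with a gentle pull toward the
    inferred goal, like snap-to-grid nudging a mouse in PowerPoint."""
    to_goal = goal - hand_pos
    norm = np.linalg.norm(to_goal)
    if norm > 1e-9:
        # Scale the assist to the user's own command so it stays subtle.
        to_goal = to_goal / norm * np.linalg.norm(decoded_vel)
    return (1 - assist_level) * decoded_vel + assist_level * to_goal

def trial(assist_level, steps=200, seed=1):
    """Simulate one reach with a weak, noisy decode of the user's intent."""
    rng = np.random.default_rng(seed)
    pos, goal = np.zeros(2), np.array([1.0, 1.0])
    for _ in range(steps):
        intent = goal - pos                       # what the user wants
        decoded = intent + rng.normal(0, 1.0, 2)  # noisy brain signal
        pos = pos + 0.05 * shared_control(decoded, pos, goal, assist_level)
    return np.linalg.norm(goal - pos)             # final distance to goal

print("error with assist off:", trial(assist_level=0.0))
print("error with assist on: ", trial(assist_level=0.2))
```

Even a modest assist level shrinks the final error in this toy run, echoing how Jan's success rate fell when the guardian was switched off.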
(08:55):
In a human-machine partnership, it's not just the computer that adapts; the human brain also adapts to the algorithm. And Andy's algorithms are simple and they rely on human-programmed rules. So when they see a cause, in this case enough neurons firing at the same time, they create an effect,
a movement. But what's tantalizing is that that effect, that output
(09:19):
hints at something far deeper and infinitely complex: personality. I
like Moby Dick where he talks about Captain Ahab walking
on the deck of the ship, and by observing his movements,
you can really understand what he's thinking about. If you
think about movement, it's really a communication between your innermost
(09:39):
thoughts in the outside world. Andy and his team saw
this come to life before their very eyes when they
connected another subject, Nathan, to a neural prosthesis. The robotic
arm moved in accordance with the personality of the user. Jan, when she moved, was very careful and gentle, and Nathan is more of a video gamer. He's
(10:01):
a younger guy. He's more of a competitor, so he
would move much faster, and so he would pick up
an object and then instead of placing it carefully in
a receptacle, he would basically toss it into the receptacle
to be faster. Today we can see the difference in
Nathan and Jan's personalities from how the brain controls a
prosthetic arm. We can infer personality from output, but we
(10:25):
don't even have sophisticated enough tools to ask why. As
we get better and better at these computational approaches, we'll
gain a much better understanding of the way the brain works.
Rather than some major event causing some simple consequence,
it's more like a perfect storm where many factors come
(10:46):
together to generate a consequence. And if we understand which
of those events are important and how they're combined, then
we can start to understand brain function better. If you
see a cup and you're thirsty, if we could realize
that what you want to do is to drink from it,
(11:07):
then we could understand that you want to grasp the
cup from the side, and if we could distinguish that
from you grasping a cup with the intention of passing
it to me, where you might hold it from the top,
then we could do a better job generating the correct
movement. For Andy, the next frontier is more deeply understanding the human brain, Kara, by developing new models and better algorithms.
(11:31):
And that's really a computer science problem as much as
a medical problem. And then really the next frontier in
medicine is all about using AI to decode complex interactions.
And in fact, you reported a piece on exactly this phenomenon. Yeah,
I spoke to a woman named Regina Barzilay. She's a computer scientist at MIT, and her own experience
(11:51):
of how she was diagnosed with breast cancer actually inspired
her to work on bringing AI into the realm of medicine.
When we come back, we'll hear from Regina and take
a look at other ways AI is changing diagnostics and
the future of our health. I remember still, I went
(12:15):
to a mammogram, and they told me, you have high density, but you shouldn't worry. Half of the population has high-density breasts, so don't worry about it. That's Regina Barzilay. Regina
is a professor of Electrical Engineering and Computer Science at
MIT. I couldn't believe it. I didn't
feel anything. There was nothing wrong, you know. I was
continuing my morning runs and being fine. It was later
(12:37):
on after her mammogram that Regina found out she had
breast cancer, and as it often does, the news came
as a surprise. And if I'm looking at myself, I
really cannot explain what was wrong that I got this disease. I clearly didn't have any family history. I'm exercising, I'm eating healthy. And for many, many patients that I
(13:00):
met during my own journey, their diagnosis came to
them as the biggest surprise. The thing is, according to
current medical standards, breast cancer risk is based on a
few factors. Are you a woman, and are you old?
Do you have the BRCA gene? And do you have
breast cancer in your family? But those are relatively simple
inputs and they don't account for what complex systems we
(13:21):
are. Above eighty percent of women who are diagnosed with breast cancer are the first in their families, so it's not clear,
you know, what causes it. According to the Susan G. Komen Breast Cancer Foundation, breast cancer is the most common cancer
amongst women around the world. Every two minutes a case
of breast cancer is diagnosed in a woman in the
(13:42):
United States, and every minute somewhere on earth a woman
dies of breast cancer. That's more than four hundred women
per day. So if you're listening to this podcast, you
probably know someone who has been or will be diagnosed
with breast cancer in their lifetime. And with that many
people affected, there are bound to be some oversights,
like in Regina's case. I discovered that my own diagnosis
(14:06):
was delayed because the malignancy was missed in the previous mammograms.
And I also discovered that this is not a unique experience.
So the question I asked is, is it possible for
us to know ahead of time what's to come? In
other words, can we look into the future prior to
our diagnosis? Regina had already been a longtime computer
(14:29):
scientist at MIT. She thought often about machine
learning in her work, but this new personal hardship redirected
her thinking. When you go to the hospital, you see
like real pain of other people. You see people who
go through chemo, through radiation. Even though the hospital is just one stop away from MIT, I just was
(14:49):
not aware there is so much suffering. And at that
point when I came back, I was thinking, we create
so much, you know, exciting new technology, and I don't see why we are not trying to solve it. It's just a travesty. And so Regina and her colleagues at MIT began training a deep learning
(15:09):
model on over ninety thousand mammograms from Mass General Hospital,
and with such a large data set, they were able
to predict a patient's risk of breast cancer by comparing
one mammogram to tens of thousands of others instantly.
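As a hedged sketch of what a model like this looks like in code, here is a toy convolutional network that maps a mammogram image to a single risk probability. The architecture, layer sizes, and names are invented for illustration and are far smaller than whatever Regina's team actually trained on the Mass General data.

```python
import torch
import torch.nn as nn

class RiskNet(nn.Module):
    """Toy convolutional network mapping a one-channel mammogram to a
    single risk logit. Far smaller than any real clinical model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # collapse to one value per channel
        )
        self.head = nn.Linear(32, 1)   # 32 features -> one risk logit

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = RiskNet()
images = torch.randn(4, 1, 256, 256)                  # stand-in mammograms
labels = torch.tensor([[1.0], [0.0], [0.0], [1.0]])   # future-cancer labels
loss = nn.BCEWithLogitsLoss()(model(images), labels)  # training objective
risk = torch.sigmoid(model(images))                   # predicted risk, 0..1
```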
My firm belief was that despite the standard risk factors, there is a lot of information in a woman's breast. A human eye,
(15:31):
which has even seen, you know, thousands or tens of thousands of images over a lifetime, may not really be able to detect it with great clarity. However, a machine which is, you know, trained on sixty thousand images where it knows the outcomes can identify the differences in pixel distribution that likely correlate to future things that
(15:57):
may come. The amount of data which you can train
a computer on versus a doctor is massive, and the AI was able to detect smaller details than the human eye could pick up, so this does feel like a
perfect application for the strengths of machine learning. Fifteen months
ago we first put out our density model, where for every
(16:19):
mammogram that goes through Mass General, it shows a prediction to the radiologists, and most of the time radiologists agree with the machine, and when they disagree, the more experienced radiologists typically side with the machine. So Regina's early
efforts are doing really well, and the hope is that
more hospitals around the country will begin using these models
(16:41):
for early detection and risk assessment. And that's not only
because understanding risk is important, but also because misdiagnoses have
led to unnecessary surgeries. I read this book The Emperor
of All Maladies. It's a book about cancer, so, let me just say, one particular moment really jumped out at me. It's about how male surgeons who
(17:04):
were treating women with these surgeries firmly believed that the more you cut out of, you know, a woman's body, the better the likelihood. In the early twentieth century,
there was a surge of doctors performing radical mastectomies, removing
the entire breast, thus permanently disfiguring a patient, and radical
(17:25):
mastectomies became the norm for much of the twentieth century.
The reasoning was that removing a lump leaves the risk
of tumors growing elsewhere, so why take the risk. By
the nineteen eighties, it was clear that radical mastectomies weren't
actually an effective treatment for many patients, but according to Regina,
surgeons still remove too much tissue out of an abundance
of caution. The reason it happens is not because there
(17:48):
are some evil doctors who want to book another surgery. It's just because people are uncertain and it's high risk, and many times a patient would say, I am ready to go for the harshest treatment to minimize the chances. So what we demonstrated is that with machine learning you can actually identify a significant percentage of the population that doesn't need
(18:09):
this type of surgery. Regina told me that had these
deep learning models been in place at the time of
her early mammograms, she might have detected her risk two
years sooner. And in many cases, early detection makes a
big difference in how a patient chooses to treat their cancer.
Since developing the breast cancer detection model, Regina is now
co-leading MIT's J-Clinic, which is
(18:31):
a new initiative focusing on machine learning and health. What
I hope is that we as a society have advanced since then and we are ready to bring, you know, the recent science to help women, even if it means that
we need to change our beliefs about how risk assessment works.
(18:52):
And Regina hopes that as a society we can move
toward greater acceptance of using machine learning to enhance medicine.
Whichever one does a better job, that one should prevail.
So, big question: how do you feel, Kara, about putting
your health into the hands of an algorithm? You know,
after speaking to Regina, who told me that she probably
(19:15):
would have been diagnosed two years sooner using her models,
it seems as though machine learning provides this unparalleled form
of detection, right, because the most seasoned doctors simply can't
compare thousands of data points at once. I don't think
the issue is algorithms replacing doctors. It's more a matter
of equipping doctors with sharper tools so that they can do
(19:37):
their jobs. They've got to provide a patient with information
and allow that patient to make informed decisions. Algorithms don't
have a bedside manner. You know, what you say about sharper tools reminds me of our conversation about creativity in episode two, using AI to give artists, musicians,
screenwriters new tools to do better work. At the same time,
(19:58):
just like the art world, the medical profession sits on
this enormous pedestal where we have to trust what they
say because most of us don't have the tools to
question them, right? Unless you're on WebMD. Yeah, the bane
of every doctor's life. Yeah, you know, doctors have a
hard enough time explaining medicine to patients. Imagine them having
to explain artificial intelligence. Well, from experience, we can say
good luck to them. What's crazy to me actually is
(20:21):
that Regina and her co author, this woman, doctor Connie Lehman,
who's a radiologist at Mass General, were rejected from every
single federal grant they submitted at first. Why would they
be rejected from all those federal grants? Because as much
as there is a ton of buzz surrounding AI, I
think people have to appreciate how new this frontier is right,
and using machine learning to make predictions about people's cancer
(20:44):
is very, very new, and it's going to take doctors
a really long time to learn how to convey this
information to patients. So that's where Regina is right now,
figuring out how doctors can explain to patients, Hey, AI
helped us determine your cancer risk. Well, part of the problem
is that we don't yet have explainable AI. So it's
not just that it's hard to explain it to patients,
(21:05):
it's actually a black box. You may remember Sebastian Thrun,
the founder of Google X from earlier in the series.
As well as self-driving cars and flying cars, he works on medical diagnostics, and he recognizes a problem. One
of the conundrums of machine learning is that when you
open up a deep neural network, you look at like
(21:25):
hundreds of millions of numbers, but you can't quite understand what's happening. So people are concerned. People look at that network and say, as the thing diagnoses a cancer, what does it do? We've talked about the black box
problem in AI and how hard it is to trust
decisions that can't be explained, but Sebastian is quick to
point out that human beings can also be difficult to decipher.
(21:47):
Let's remind ourselves our doctors are also black boxes. You
can't open up the brain of your doctor and ask
what was he or she using for diagnosing a cancer.
It's a fair point, and it's one that has also been noted by Siddhartha Mukherjee. He is one of the world's foremost cancer doctors and the Pulitzer Prize-winning author of The
Emperor of All Maladies. That's the book that Regina referred
(22:09):
to earlier, and Siddhartha has also written extensively about how
AI is changing medicine. I've been a huge fan of
his work for a long time and actually ambushed him
after a talk he gave in order to persuade him
to an interview for this podcast. So thinking about AI
helping diagnose patients, it's worth asking is the human doctor
so very different? One problem that I think is fascinating
(22:32):
is when a patient comes into the hospital. If you
ask a particularly astute physician, that physician can actually describe
to you what the most likely journey of that patient
will be in the hospital, whether they're likely to stay
for twenty-five days, suffer through bacterial sepsis, you know, all from peeking in through the door of an emergency room. Siddhartha started to wonder how doctors make those lightning-fast calls,
(22:54):
and it got him interested in understanding what the brain
is doing when a doctor makes a diagnosis. We
actually understand very little about how human beings make diagnoses.
I mean, the studies that have been done so far
suggest that most people make diagnoses in a kind of
recognition sense rather than an algorithmic sense. The classical description
(23:15):
of how we make diagnoses was extraordinarily algorithmic, sort of going down a series of eliminations. It's not this, it's
not that. Now, when Siddhartha says algorithmic, he doesn't mean using a computer algorithm. He means using rule-based logic:
If this, then that, et cetera. But what he learned
was that despite what the textbooks say, that's not actually
(23:36):
how doctors make a diagnosis. When you put doctors inside
MRI machines and ask the question, how do they make diagnoses? In fact, what lights up are parts of the brain that have much, much more to do with pattern recognition.
Here's a rhinoceros, Here's not a rhinoceros. Here's an elephant,
here's not an elephant. Especially mature doctors make diagnoses based
(23:58):
on pattern recognition, and they'll flit around like moths around the flame and ultimately slowly arrive at the target. It's a much more geographical way of thinking rather than linear. They're
using a combination of Bayesian or prior probability understandings. They're
using pattern recognition. They're understanding things about the patient and
figuring out what to do. Hearing Siddhartha speak about doctors,
(24:24):
Kara, in terms like prior probability understandings and Bayesian statistics
really does make it sound like he's describing AI rather
than people. Well, it kind of is. You know, neural
networks are purposely modeled on the human brain. It's not
as easy as cause and effect. It's about drawing on a lifetime of experience to make best guesses based on competing information that we have to weigh appropriately in microseconds.
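For anyone who wants the Bayesian piece as arithmetic, here is a tiny worked example of prior-probability reasoning. The prevalence and test accuracies below are made-up round numbers, not real clinical figures.

```python
# Bayes' rule with made-up round numbers:
prior = 0.01           # P(disease): 1% of patients have it
sensitivity = 0.90     # P(positive test | disease)
false_positive = 0.05  # P(positive test | healthy)

# Total probability of a positive test, sick or healthy.
p_positive = sensitivity * prior + false_positive * (1 - prior)

# Posterior: how likely disease is once the test comes back positive.
posterior = sensitivity * prior / p_positive
print(f"P(disease | positive) = {posterior:.2f}")  # about 0.15
```

The point is the one Siddhartha is making: the prior matters enormously, which is why a positive test for a rare disease can still leave the actual probability of disease fairly low.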
(24:47):
It's no easy task. It's funny because we've warned several
times on this series that we shouldn't be surprised when
our creations reflect us, and yet it's almost impossible not
to be. I feel a sense of uncanny chills when
Siddhartha describes a human doctor working like an algorithm.
And he wrote about this in The New Yorker with
the headline A.I. Versus M.D. And he made this point,
(25:09):
which is that human and machine processes of making diagnoses
are converging. And it makes me wonder who's going to
have the final word. Well, I asked Sebastian Thrun exactly that question. We all die of cancer a lot. I believe many of those deaths are actually preventable using artificial intelligence. It's amazing how diverse the diagnostics you get are. When you show
(25:32):
a set of dermatologists the same set of images, some
will say cancerous, others will say fine. And Sebastian
has a personal interest in the topic. My family, unfortunately
has a long, long, long history of cancer. My sister passed away last year. My mother passed away at a young age. So one of the questions I had in my life with me since my mother died is,
(25:56):
maybe we should not work on treatment. We should really focus on detection, on diagnostics. Diagnosis of skin cancer
doesn't require looking inside your organs. You can just look
at the person from outside, and it turns out we don't have any early symptoms. Before it becomes dangerous, it sits there for quite a while. It grows below your skin, it spreads,
and then it destroys your liver, and then your first
(26:16):
symptom might be back pain or a yellow face.
Maybe we should just look every single day. In fact,
Sebastian has worked to make it possible for people to
check themselves every day. He published a paper in Nature
called Dermatologist-level classification of skin cancer with deep neural networks.
What he demonstrated is that a program that runs on
(26:38):
an iPhone performs just as well as dermatologists at diagnosing
skin cancer. It sounds transformative, but Siddhartha has a very
specific concern about this kind of technology entering the mainstream.
Well, overdiagnosis is an important risk. A classic example
of that is a lesion in the breast, a spot
(26:58):
that is actually not breast cancer, but it's picked up
and described as breast cancer, that leads to a biopsy,
the biopsy leads to complications and so forth, and at
the end of it you discover that, you know,
you've achieved not very much, except for subjecting a woman
to an unpleasant procedure with unpleasant costs. We don't want
to catch just early cancer. We want to catch the
early cancers that are likely to kill you. For the other
(27:20):
ones that are unlikely to become anything, we actually want
to be able to reassure patients that they don't need
a biopsy. Regina told Kara how screening enabled by machine
learning could reduce unnecessary mastectomies. But according to Siddhartha, we also need to be cautious about overscreening pushing us into
unnecessary procedures. And as AI and sensors become more ubiquitous,
(27:44):
enabling us to constantly search for illness, there may be
psychological implications that we're not fully prepared for. This is the very Orwellian notion of previvors. It's a word that I first encountered in clinic. It was a woman who had a BRCA1 mutation, but in fact did not have any breast cancer. She called herself a previvor of breast cancer. She was a survivor of a
(28:05):
disease that she did not yet have. Our culture hasn't
reached the place that you know, we're routinely thinking of
ourselves as previvors. But it has reached a place where
surveillance is constant. You know, you're moving from colonoscopy to mammogram to PSA test, to medical exam
to retinal exam. And you can imagine stringing together with
(28:26):
future devices, a culture in which the body is always
being hunted, scoured, for being a potential locus of future disease, and that will, I think, distort culture fundamentally. It's a
very Orwellian, very scary idea. Siddhartha alludes, of course, to George Orwell, whose novel 1984 was prescient about the
culture of surveillance that's now blooming around us. But I
(28:47):
never thought about surveillance in medical terms before. And who
might be surveilling our bodies. One might be a health
insurance company or the government interested in, you know, who's
healthy and who's not healthy. There was a chance meeting
between Siddhartha and Sebastian that got Siddhartha thinking about AI
and medicine, but the two have fundamental disagreements on the
(29:08):
risks and rewards of surveilling the body. I love Sid
as a person, but I can tell you any doctor
who tells you less data is better for you is irresponsible.
If I could give you information about your skin cancer every day, you will live longer than if you just consulted a dermatologist every year or two. But also the unpredictability
(29:29):
of death is part of the human experience. Our culture
would be very different if we walked around with signs
on our foreheads which told us the number of days
that we had left to live. What Siddhartha is describing
is not some thought experiment. Using AI to predict time
of death is fast becoming a reality. But what inputs
does it use? And how might knowing when we
(29:50):
will die change our culture? Join us after the break.
Doctors are actually very poor at predicting death. If you
look at the pattern of how people die, most people
don't decline along a predictable path towards their death, so
(30:14):
it's often a series of strings that snaps. If you
think about the human being being held together like a
puppet on many strings, it's not that the puppet
slowly crumbles at a predictable pace. It's that all of
a sudden, three strings collapse and the hand comes dangling
down, and medicine tries to prop that
piece up, and in doing so now two more strings
(30:36):
get cut, and the foot collapses, and when
a certain number collapse, nothing can be done. So
it's a fundamental failure of homeostasis that makes death very
hard to imagine or conceive of. And of course there is an
emotional component to this, but unlike human doctors, AI doesn't
get distracted by emotion. It looks at evidence and historical
(30:57):
data to establish patterns. The algorithms actually do quite well
in predicting death. What is it attaching weight to? Is
it a combination of things? Is it the fact that
someone has a brain metastasis and has a slight rise
in some blood value of some sort that predicts that
this person is likely to do very badly in the
next few days. You know, as you refine it further
(31:20):
and further, many subtle things might start coming up that we don't know about, and those will be the most interesting ones.
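As a sketch of what attaching weight to a combination of things can look like, here is a toy logistic model that combines a few clinical inputs into a short-term risk score. The feature names, weights, and bias are invented for illustration; they are not from any real mortality model.

```python
import math

# Invented features and weights, purely for illustration.
features = {"brain_metastasis": 1.0,  # present
            "blood_value_rise": 0.8,  # slight rise in some blood value
            "age_over_80":      0.0}  # absent
weights  = {"brain_metastasis": 1.2,
            "blood_value_rise": 0.9,
            "age_over_80":      0.7}
bias = -3.0  # low baseline risk when nothing is abnormal

# Weighted combination of inputs, squashed into a probability.
score = bias + sum(weights[k] * features[k] for k in features)
risk = 1 / (1 + math.exp(-score))
print(f"predicted short-term risk: {risk:.2f}")
```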
It's not just additive. That phrase, it's not just additive, is important to me, Kara, because it connects the dots between what Siddhartha is saying about predicting time of death and what Andy Schwartz was saying about getting better at decoding the human brain. They're both about understanding
(31:42):
systems where one plus one doesn't necessarily equal two, where unexpected results emerge from complex systems. Thinking about this makes
me physically ill. He's literally talking about predicting when we will die, and it's mind-blowing. You know, knowing when you will die could change how we choose to live our daily lives. It's one of
(32:03):
the things I find most disturbing in this whole series. So
much of how we live and how we aspire and
what we hope for is connected to our uncertainty about
when we're going to die. And AI can change
all of that. It's part of this more global line
of thinking, which is that AI kind of takes the
fun out of things. Well, I mean, also, a big part of Western culture is the Bible and
(32:25):
the fruit of that forbidden tree, and in Paradise Lost, there's this warning to know to know no more,
there's always been this idea in history that there's something
magic about not knowing. Yeah, you know, Milton aside, okay, we can start to think about what
this might mean more practically. Lifespan data is directly linked
(32:46):
to life insurance policies. You know, this would significantly change
how much people pay. You also think about personal injury law,
which takes into account how long somebody is going to live,
so you can determine how much money they should get
for loss of quality of life. These are things that
would be greatly impacted by people knowing when they're going
to die. Right. So, Siddhartha calls death the ultimate black box,
(33:08):
which I think is actually the perfect description of a black box.
We know we will die, we just don't know how,
and we don't know when. And with AI it's similar.
We know we'll get a result, an endpoint, but we
don't know exactly how our input factors are combined to
get there. Ironically, although AI itself is a black box,
it's helping us unpack other black boxes, death being the
(33:29):
ultimate one, the brain being another. And then there's the
human genome, the unique pattern of our DNA that makes
each of us us. And we've mapped the genome, but
there are a lot of concerns about decoding it, that
it might be a sort of Pandora's box. Well, there's
that story about the scientists in China who edited the
genes of two babies using CRISPR and all the ethical
(33:51):
concerns of creating genetically edited babies. Exactly. So I spoke with
Andy Schwartz about actual progress being made in decoding the
human genome. If you go back to the late nineteen nineties, the race was on to discover the human genome, and the sound bites were, as soon as we understand
all the genes, we can cure disease. So, for instance,
(34:13):
there was a breast cancer gene, and there was an
Alzheimer's gene, and if we just knew what those genes were,
we'd be able to eradicate these diseases. Well, it's been
twenty years now and we're just beginning perhaps to get
some sort of genome based therapies that might address some
(34:33):
of these. And what we found is that there's no
simple cause and effect. Very rarely are there simple gene
defects correlated with disease. Rather, these diseases have hundreds of genetic bases, and each of those is relatively weak, but combined together, they generate these diseases. And so it becomes a computational
(34:55):
problem and we start looking again at this as a
complex system where causality is no longer clear. And these
are the complex computational problems we're getting better and better at solving. Siddhartha himself is interested in exactly this area,
the confluence of genetics and computation. In fact, in twenty
eighteen he gave a talk at Vanderbilt University called From
(35:17):
Artificial Intelligence to Genomic Intelligence, and it's an area where
we're making rapid progress. The first papers are just starting
to appear there in preprint. One of them is extraordinarily interesting.
It appears to be able to predict height based on
an algorithm and genomic information. Tall parents tend to produce tall children; short parents tend to produce short children.
(35:39):
But we did not have ways to predict based on
genetic information what your actual height was going to be.
The question becomes, well, how do you take the genome
and out pops height from it? If you can do that,
that means you can take a fetal genome in utero
and predict this person's future height. You know, based on
these first few papers that I read in this arena,
you require deep learning to do this. I wanted to
(36:01):
understand why this prediction of height requires deep learning algorithms,
not simple ones like Andy used to interpret Jan's brain waves.
It's not just additive, where you can just add up across
multiple variations in the genome and arrive at a risk score.
It's that there are interactions between genes that have to
be captured.
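Here is a minimal sketch of the difference Siddhartha is pointing at, with invented numbers: a purely additive polygenic score just sums weighted variants, while a non-additive term only fires when two particular variants occur together, the kind of gene-gene interaction a deep model can learn.

```python
import numpy as np

rng = np.random.default_rng(2)
genome = rng.integers(0, 3, size=1000)    # 0, 1, or 2 copies per variant
effects = rng.normal(0, 0.05, size=1000)  # tiny per-variant effect sizes

# Additive polygenic score: just add up the weighted variants.
additive = genome @ effects

# Interaction term: two variants that only matter in combination,
# which a plain additive score cannot represent.
interaction = 0.5 * float(genome[10] > 0 and genome[42] > 0)

print("additive score:        ", additive)
print("with gene-gene effect: ", additive + interaction)
```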
(36:23):
Again, these are early days for artificial intelligence being unleashed on genomes, but it seems to me that
complex problems of genetic architecture will soon be predictable using
these kinds of algorithms, and that ability to predict raises
huge questions for all of us. If you want to
know the height of your unborn child, but do you want to know the risk of dyslexia? Those questions are almost
certainly likely to lead to extraordinarily acrimonious public conversations about
(36:47):
what should be done and what shouldn't be done in
terms of accessing the data, who gets to store the data,
how much privacy we should have about it, and how
much it will distort human culture to have these pieces
of knowledge. So, if you think that knowing when you're going to die is going to distort culture, then knowing how tall your child is going to be
in the future will also distort human culture. We haven't
(37:08):
ever lived in a place or a space or a
time when that knowledge has been predictable from a fetus.
Artificial intelligence is giving us incredible power to see into
the future, to ask and answer questions about the generations
to come. But it is up to us, our generation,
to decide how we want to use this awesome power.
(37:29):
But one thing that artificial neural networks can't do is
define principles. They can only work on classifying things that
we tell them to classify. There is still a human
telling the artificial neural network what it should be doing.
There is something very fundamental about the human brain, a
scientist's brain, a doctor's brain, an artist's brain that asks
(37:52):
questions in a fundamentally different manner, the why question. Why
did this happen in this person at this time? Why
does the melanoma appear in the first case? What is
the molecular basis of that appearance? The most interesting mysteries
of medicine remain mysteries that have to do with the why,
and despite being at the absolute cutting edge of medical research,
Siddhartha's most important guiding principle was written in ancient Greek
(38:15):
over two thousand years ago. Remember, the Hippocratic oath begins, first, do no harm. It's maybe the single profession where the
oath of the profession is in the negative. And this
is for a reason. It's for a profound reason in medicine,
because we're intervening on bodies, because we're intervening on homeostasis,
because we're intervening on cultures. Effectively, the capacity to do
(38:36):
harm arises very quickly, and so the first do no
harm injunction in the Hippocratic oath is an important thing to keep in my own mind. You know,
what are the harms that arise if I were to
start knowing my risks of future disease, not just what
advantages would I get in society. And this battle is
happening in my mind, I assume in the minds of
(38:56):
virtually every doctor. As we move forward into this
beautiful and perilous future, in a world where our choices
can create new beauty but also a new peril. It's
important that we move into that future with real care.
And it's something Siddhartha has thought a lot about personally, because, like Sebastian Thrun, he has a family history of heritable conditions.
(39:19):
The risk is of schizophrenia and bipolar disorder, and
right now the algorithms to predict this still don't exist.
As the project of sequencing lots of genomes and asking
what diseases people have matures, this data set will become
available maybe five, ten years from now. I will be
past the period I suppose where that will make a difference.
(39:42):
But to my children and my grandchildren, it might make
a difference, and they'll have to make that decision. I
will advise them individually, and it will depend on humanistic
understanding of what an individual's desire to understand their own
risk is. There's no algorithm that predicts that understanding. As
AI advances, we're being faced with more and more urgent
(40:03):
ethical choices. This, in turn, may put a new emphasis
on the humanities, or, as Kai-Fu Lee suggested, place a
new premium on personal attention, human interaction, and emotional care.
Once we give up some of the diagnostic pattern recognition
material to machines, it will be time to play. It
(40:24):
will be the time to play in the arena of
human therapeutics, human biology, the complexity of the human interaction,
the art of medicine. My hope is that medicine, being
more playful, will become more compassionate, more able to take
into account individuals and their individual destinies rather than bucketing
people in big categories. It means having more time to
(40:47):
spend with humans. You know, we are so constrained by
time that even compassion gets three minutes, we won't become
more robotic, will become less robotic as the robots and
our own What's the data describes is the holy grail
of the AI revolution. Could it allow us to be
(41:07):
more human, to be better doctors, more fulfilled workers, and
greater artists. Could it take routine work out of our
hands and allow us to take better care of each other.
It's a compelling vision, but as always, it has a
dark side. While most doctors are guided by their Hippocratic oath, do no harm, there's no guarantee that new technologies
(41:30):
will stay in the right hands. The line between healing
and upgrading our bodies is thin and contested, and as
AI improves, we can begin to translate desires directly from
brain activity, modify the physical traits of our children through
gene editing, and accurately predict when we will die. In
the next episode, we ask what does all of this
(41:52):
mean for our future as a species? We speak to the world's leading thinker on these questions, Yuval Noah Harari, author of Sapiens and Homo Deus. I'm Oz Veloshin. See
you next time. Sleepwalkers is a production of I Heart
(42:19):
Radio and Unusual Productions. For the latest AI news, live interviews,
and behind the scenes footage, find us on Instagram, at
Sleepwalkers Podcast or at sleepwalkerspodcast dot com. Sleepwalkers is
hosted by me, Oz Veloshin, and co-hosted by me, Kara Price, produced by Julian Weller, with help from Jacopo Penzo
(42:40):
and Taylor Chakoin, mixing by Tristan McNeil and Julian Weller.
Our story editor is Matthew Riddle. Recording assistance this episode
from Joe and Luna to Brina Boden and Joseph Friedman.
Sleepwalkers is executive produced by me, Oz Veloshin, and Mangesh Hattikudur.
For more podcasts from I Heart Radio, visit the I
Heart Radio app, Apple Podcasts, or wherever you listen to
(43:00):
your favorite shows.