Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:14):
Welcome to Tech Stuff. This is The Story. Each week
on Wednesdays, we bring you an in depth conversation with
someone who has a front row seat to the most
fascinating things happening in tech today. A conversation with David Spiegelhalter,
a professor of statistics at Cambridge and author of The
(00:35):
Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck.
Spiegelhalter has devoted his life to understanding uncertainty. After all,
it's one of the most uncomfortable aspects of being human,
particularly when it comes to our health. Since the nineteen seventies,
Spiegelhalter has worked on algorithms to help clinicians and patients
(00:58):
make better decisions about what treatment options to take for cancer,
and he has a deeply personal understanding of the topic.
In nineteen ninety seven, he lost his son Danny to
cancer at the age of just five, and the epigraph
to David's book is a quote from the Bible. The
race is not to the swift, nor the battle to
(01:18):
the strong, nor bread to the wise, nor riches to
men of understanding, nor favor to men of skill, but
time and chance happeneth to them all. Of course, with
the explosion of AI, we now have better and better
tools to help us understand the world and make informed decisions.
And in fact, Spiegelhalter was an early pioneer of the technology.
(01:41):
So that's where I decided to start our conversation. We
live in this extraordinary moment where technology seems to be
giving us more of a view around the corner into
the future than it ever has. How has the march
of technology impacted your work and your understanding of these topics
over your career?
Speaker 2 (02:02):
Oh, a huge amount. I mean, we needn't get into
the whole Bayesian methodology, but that's what I was interested in,
and we couldn't do it because you just couldn't do
the calculations. You couldn't do the maths. But instead of
trying to do the maths, you just used brute computing
force to simulate millions of different possibilities and look at
their distribution and the algorithms we knew would converge to
(02:24):
the correct answer if they ran long enough. You had
to wait till nineteen ninety or so. Just about then, that
technology, that ability to do such huge simulation exercises, was
on everyone's desktop, and then there was an explosion and
a complete change in the way statistics was done. Up
to then people did clever maths and then programmed that
in, and it changed into: no, we don't have to
(02:47):
do any maths. We just have to program in the problem,
the model that we're trying to solve, present the data
to it, and send it off. And wait. But what
I think you might be starting to allude to, which
I'm sure we'll get onto, is the role of AI.
AI is already changing my work. I used it a lot in
writing the book, both in the researching and summarizing of
(03:10):
literature and of course in the coding all the time.
You know, I always rewrote everything, but I used it
a lot. I use it all the time in my
daily work, daily life. But actually, will it be able
to make predictions? And I am rather skeptical about claims
both at, sort of, you know, what you might
call a global level, or a social level, or
(03:30):
even at a personal level, about our health, about the
ability of AI to make predictions.
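To make that concrete, here is a minimal Python sketch of the kind of brute-force simulation he describes, on an invented toy problem (estimating a coin's bias from ten flips); the Metropolis sampler below is one standard example of an algorithm whose output converges to the correct answer if run long enough. The numbers are illustrative, not from the episode.

import math
import random

# Toy problem (invented for illustration): estimate a coin's unknown
# heads-probability theta after seeing 7 heads in 10 flips, with a flat prior.
HEADS, FLIPS = 7, 10

def log_posterior(theta):
    if not 0 < theta < 1:
        return float("-inf")  # zero prior outside (0, 1)
    return HEADS * math.log(theta) + (FLIPS - HEADS) * math.log(1 - theta)

# Metropolis algorithm: propose a small random move, accept it with
# probability min(1, posterior ratio). Run long enough, the chain's
# visits are distributed according to the posterior.
theta, samples = 0.5, []
for _ in range(200_000):
    proposal = theta + random.gauss(0, 0.1)
    if math.log(random.random()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal
    samples.append(theta)

kept = samples[50_000:]  # discard warm-up draws
print(f"posterior mean ~ {sum(kept) / len(kept):.3f}")  # exact answer: 8/12 ~ 0.667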
Speaker 1 (03:36):
Your book has the epigraph from the Bible. How did
you come up with that?
Speaker 2 (03:40):
Oh, by using AI to ask for quotes that use chance
and things like that.
Speaker 1 (03:44):
Is that true?
Speaker 2 (03:45):
Yeah. Actually, for that one, I knew that one,
but otherwise, yeah, I use AI.
Speaker 1 (03:49):
Well, why that one? Why was that the first one
that you used?
Speaker 2 (03:52):
Oh, I think because the whole book, and especially
the first chapter, which is about my grandfather, was intended
to give the idea of the utter lack of control
we have in our lives, and we have an illusion
of control, which I think actually is not helpful. I
think it's important to realize how little control we do have
in our lives, how much of what happens to us
(04:13):
is what, for want of a better word, we might
call chance. In other words, events that will happen to
us that were unpredictable and that, you know, that
we had no control over. I think that's rather important
to realize. And the word that appears in
the book more than almost any other is humility. There's
almost no mention of the word rationality in the book.
This is not a book about being rational. It's a
(04:36):
book about being humble.
Speaker 1 (04:37):
You mentioned your grandfather, and in the book you talk
about how he survived various battles in World War One.
You also talk about your mother being captured by pirates in
the South China Sea, and then making it to
the UK, where she would meet your father.
Speaker 2 (04:52):
Yeah, who then nearly died in the Second World War
as well.
Speaker 1 (04:54):
He did.
Speaker 2 (04:55):
Yeah, he got TB. I mean, it was an illness
and he was in the hospital for weeks, and he
was evacuated from Jerusalem. He was there
when he heard the King David Hotel being blown up.
So, you know, so much could have happened
to both of them. Then, as I mentioned in my book,
the biggest chance event of all is my conception. It's
not just me; everyone's conception is an extraordinarily unlikely event. It
(05:17):
so easily could not have happened. So me realizing that
and researching the circumstances of my conception, it really made me...
You know, it changed my whole attitude to life. In fact,
it really did. It made me think, God, I'm here
just by total chance, in what I call these micro
contingencies that just accumulated, and here I am. And so
(05:38):
the idea somehow that I'm, you know, on the earth
for a purpose, or that I'm, you know, in any way
special, I find is now, for me, a complete illusion.
Speaker 1 (05:48):
When did you start getting interested in this relationship between probability, statistics,
and medicine.
Speaker 2 (05:54):
Yeah, I was interested in the mathematical aspects of statistics particularly,
But then there was a job going, and the funny
thing is, the job going was in nineteen seventy eight, and it
was to work on what was then called computer aided diagnosis.
Well, now we'd call it AI. So, nineteen seventy eight,
we were using some basic statistical algorithms, what is now called
(06:15):
naive Bayes, simple algorithms. It's still around and used as
a very basic machine learning algorithm, for example in
spam filtering or whatever. And we were doing that in
the late seventies. So it was one of the first jobs
working on algorithms in medicine, for both diagnosis and prognosis.
We were working on the likelihood of someone with head
(06:35):
injury surviving and so on. And the computers, you
couldn't even carry them around or anything. They sat
in the corner, a huge, great machine with a keyboard,
incredibly clumsy to use, but we were doing that.
And then in the early eighties I was working on
more algorithms. Then we got into developments in AI and
so on. So you know, this stuff is not new.
(06:56):
It's been around for ages. One tool, Predict, is
still going. That's an algorithm for predicting the survival
of women with breast cancer and men with prostate cancer,
still available, hugely widely used. It's a very good statistical algorithm,
a regression algorithm. Of course, in practice, any actual clinician
(07:18):
making a decision with the patient would use much more
personal information that they might have about the patient, because, you know,
for example, physical status doesn't go into the algorithm, and
yet that might be you know, someone's basic underlying health
might be incredibly important. So when we wrote the
interface for Predict, we tried to emphasize not to say,
this is the risk for this patient. It's just what we
(07:41):
would expect to happen to one hundred people
who ticked the boxes that she did or he did.
But actually people have tried different, more sophisticated machine learning
methods and they don't make much difference. And so it's
about as good as you can do, I think, with
the data that is available. You could always do better
by collecting more data and having a bigger, better database.
(08:02):
It's going to be marginal, marginal benefits just from using
AI with the same data. So the real, you know,
benefit in the future, of course, is just by having
better data.
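As an illustration of the naive Bayes idea he mentions, here is a minimal sketch in the spam-filtering setting; the four training messages and the test message are invented for the example.

import math
from collections import Counter

# Tiny invented training set: (words in message, label).
TRAIN = [
    ("win cash prize now".split(), "spam"),
    ("cheap prize click now".split(), "spam"),
    ("meeting agenda attached".split(), "ham"),
    ("lunch tomorrow with the team".split(), "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for words, label in TRAIN:
    class_counts[label] += 1
    word_counts[label].update(words)

vocab = {w for words, _ in TRAIN for w in words}

def log_score(words, label):
    # log P(label) + sum of log P(word | label), treating words as
    # independent given the class: the "naive" assumption.
    total = sum(word_counts[label].values())
    score = math.log(class_counts[label] / len(TRAIN))
    for w in words:
        # Laplace smoothing (+1) avoids zero probabilities for unseen words.
        score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return score

msg = "claim your cash prize".split()
print(max(("spam", "ham"), key=lambda c: log_score(msg, c)))  # -> spam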
Speaker 1 (08:12):
So why did you say a few moments ago that
you doubted that AI would be useful?
Speaker 2 (08:16):
Oh well, I mean it's going to be marginal, marginal
benefits just from using AI with the same data.
Speaker 1 (08:22):
What about other things, like drug discovery?
Speaker 2 (08:24):
Oh yeah, all that. Yeah, that's very important. No, no,
then it's going to be great, huge, huge benefits there.
Now I'm talking about predicting what's going to happen to
me in the future, because I've got prostate cancer, so
I am quite interested in this. And of course when
I got it, I looked at all the algorithms and
which were sort of helpful, but they're very broad. All
they do really is tell you what we would expect
(08:45):
to happen to one hundred people who ticked
the boxes you've ticked, and of course everyone is so different,
Everyone's so different, so I know that that will give
me a broad figure, but it's only a ballpark figure.
It's still very useful, an idea of my ten-year survival, but
one that I know could be changed by just having
(09:06):
more information.
Speaker 1 (09:07):
So you spent your career trying to help doctors and
patients make better decisions about what to do when a
patient gets sick. But you've also lived this with your
son Danny, this experience of how to make medical decisions
in the face of horribly serious illness.
Speaker 2 (09:25):
Oh, that's interesting because I do think about that. We
made some decisions and I'm not sure, you know. Maybe,
we always say, well, maybe if we had taken him
to the States, to Canada or something, we could
have got a different therapy. In a way, that's one
thing I prefer not to dwell on too much, because
you know, you don't know. But it has made me
(09:46):
very aware of the importance of making informed medical decisions,
and that's what I worked on with my team,
providing decision aids, not in any way to
encourage people to make any particular decision, but just as
I mentioned in the book, I don't believe that decision
theory, and all the advances that have been made in
it, can ever tell you what to do, because it
(10:07):
assumes that you know all the possible outcomes, you know
all the possible options, you know all the probabilities and
the values, and of course this is totally infeasible apart
from the simplest sort of gambling type examples. You never
know any of these things. You never know how you're
going to feel in the future if something happens, and
so on. So it's impossible to be rational in those
(10:29):
situations and use decision theory. But I think it's really
helpful to try to examine the problem, to face up
to the fact that a decision has to be made. One of the
biggest problems about decisions is that people don't actually make decisions.
I don't know, you just go. You just find yourself going
down a path, and you never just stop and say,
this is a decision point. There is a branch in
(10:51):
the road. We have to choose which way to go,
and sometimes you can recognize those points, but they're few
and far between, and so I would love to encourage
people to actually have many more of those decision points.
This is the time we have to make a decision.
These are the possibilities, the benefits and harms of the
options that face you. We're not going to tell you
(11:11):
what to do. We might be able to put some
rough probabilities on things. For example, for women with breast cancer, we've
got such a lot of data we can actually produce
reasonably good ten year survival rates and what the benefit
would be if you had chemotherapy. So, for example, in Cambridge,
unless your absolute survival benefit is going to go
up by three percent with chemotherapy, they don't recommend chemo
(11:34):
therapy, because that means essentially that, of all the people
given chemotherapy, out of thirty people, only one will
benefit after ten years. You only get one extra ten-year
survivor for thirty people being given chemotherapy, which
can have a really awful effect on people. And by
producing those numbers you can get a feeling that well,
(11:55):
you've got to have a reasonable benefit in order to
take the hit of the treatment. So in those situations,
I think it's really good. You can't do it
exactly, but you get some idea of what the benefits might be.
But on the whole, you know, it's difficult to do
that in situations where you haven't got all that data
and all that analysis, all that tech behind it.
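His chemotherapy arithmetic, written out as a few lines of Python: a three percentage point absolute survival benefit corresponds to roughly one extra ten-year survivor per thirty or so patients treated, the so-called number needed to treat.

# The chemotherapy arithmetic from the episode.
absolute_benefit = 0.03  # +3 percentage points of ten-year survival

# Number needed to treat: patients given chemotherapy per extra survivor.
print(f"number needed to treat ~ {1 / absolute_benefit:.0f}")  # ~33, roughly 1 in 30

# Equivalently: expected extra ten-year survivors among 30 treated patients.
print(f"extra survivors per 30 treated ~ {30 * absolute_benefit:.1f}")  # ~0.9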
Speaker 1 (12:17):
Coming up: Is it possible to predict murder? Stay
with us. Well, it just is remarkable to think that,
(12:40):
kind of hiding underneath all of these numbers and statistics
and maths are so many life and death decisions. And
the other thing I wanted to ask you about was
your work on one of the most prolific serial killer
cases of all time, the Harold Shipman case. Just for
a US audience, can you explain that case and what
(13:00):
your work on it was.
Speaker 2 (13:02):
Harold Shipman was a family doctor who, over a twenty
year period, murdered at least two hundred and fifty of
his patients and possibly up to four hundred without being caught,
of course, until he finally, rather stupidly,
did a rather bad forgery of a will in order
to inherit some money. Absolute madness. And it was
(13:24):
a woman whose daughter was a solicitor who looked at
this will and just didn't believe it. And so suspicions rose,
and finally he was arrested and they exhumed the last
fifteen patients that had died, and they all had incredibly
high levels of diamorphine, heroin essentially, in their bloodstream. I mean,
(13:44):
he only got away with it because there were never
any post mortems. They were mainly old people, and they liked him.
He was a very trusted family doctor for many people.
He did a lot of home visits, and that's of
course when he murdered people. So someone went
back and looked at all the certificates of the time
of death. For most doctors, people died at all times of
(14:07):
the day and night, a sort of uniform distribution over
the twenty-four hours. Harold Shipman's deaths had a great,
huge spike between around one and three in the afternoon,
when he did his home visits.
Speaker 1 (14:18):
And what was your involvement personally with the case.
Speaker 2 (14:21):
There was a public inquiry, because the families, quite rightly,
and other people, asked, how did he get away with
it for so long? It's an absolute scandal. And the
public inquiry, I think very sensibly brought in quite a
substantial team of statisticians to look at the data,
like the time of death data, but also the deaths
of all his patients, when they had occurred, how they
(14:44):
compared with other doctors, how many would have been expected
compared with how many were actually observed. And we used sort
of fairly standard industrial quality control methods to work out
when you could have spotted with considerable confidence that something
unusual was going on. It's like how industrial quality control
methods spot when a production line is going a
(15:05):
bit out of kilter. These had been developed over decades,
and we used those for his death rates and worked
out he could have been caught after about forty deaths,
or he could have been identified as being odd. In
other words, someone could have done an investigation. Now,
when the algorithm that we developed was applied to a
thousand other GPs without their knowledge, there were six who
(15:29):
looked as bad as Shipman.
Speaker 1 (15:31):
With as many deaths on their watch?
Speaker 2 (15:32):
Exactly. Why do you think that was? They were really
good GPs who were working in retirement communities and who
were enabling their patients to die at home rather than
going to hospital, by being really good, caring GPs, and
so they were signing a lot of death certificates. And
these were really good people, but they had very high
(15:54):
death rates. So I use this as an example all
the time of how statistics is about correlation, not causation.
You know, if someone has got a high death rate,
it's an indication that someone perhaps should look at the data,
but you cannot conclude the cause for that.
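A minimal sketch of the quality-control idea: accumulate observed-minus-expected deaths and flag when the running excess crosses a threshold, in the style of a CUSUM chart. The yearly figures and threshold below are invented; the inquiry's actual methods and data were far more elaborate.

# CUSUM-style monitor on a doctor's yearly death counts (all numbers invented).
def cusum_flag(observed, expected, threshold):
    """Return the first year index where cumulative excess deaths exceed threshold."""
    excess = 0.0
    for i, (obs, exp) in enumerate(zip(observed, expected)):
        # Reset at zero so a few quiet years can't mask a later run of bad ones.
        excess = max(0.0, excess + (obs - exp))
        if excess > threshold:
            return i
    return None

observed = [12, 14, 15, 16, 18, 19, 21]  # death certificates signed per year
expected = [11, 11, 12, 12, 12, 13, 13]  # expected from comparable practices

print(cusum_flag(observed, expected, threshold=20))  # -> 5: flag in the sixth year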
Speaker 1 (16:10):
Well, just in the last few weeks, the UK has
announced an algorithm to predict the likelihood of someone committing murder.
Speaker 2 (16:18):
Well, those algorithms have been around for ages, but as for the
chance of predicting someone, all you'll do is find a
small change in odds. You're never going to be able
to predict an event like that at an individual level.
You'll be able to predict that someone is at slightly
increased risk. But there's so much puff behind these algorithms.
You know, they get headlines, but actually
(16:39):
I'm deeply skeptical about their actual ability and certainly to
predict events like murders.
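Why a small change in odds is the most such tools can honestly offer: for a very rare event, Bayes' rule implies that even an accurate-looking classifier flags overwhelmingly people who will never commit it. The sensitivity, specificity, and base rate below are invented for illustration.

# Base-rate arithmetic for predicting a very rare event (numbers invented).
base_rate = 1 / 100_000  # fraction of people who will commit the event
sensitivity = 0.90       # flags 90% of true future cases
specificity = 0.99       # wrongly flags only 1% of everyone else

# Bayes' rule: P(event | flagged).
p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
ppv = sensitivity * base_rate / p_flagged
print(f"P(event | flagged) ~ {ppv:.4f}")  # ~0.0009: over 99.9% of flags are false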
Speaker 1 (16:45):
You spend quite a bit of time working on public
inquiries and informing the public about various issues, and I
think you have one of the most interesting, if not
the most interesting, title in British academia: Professor for the
Public Understanding of Risk. Can you talk a little bit
about what that means?
Speaker 2 (17:00):
Yeah, I think I am the one and only Professor
for the Public Understanding of Risk because after I retired
they renamed it when the next person got the funding.
So this was fascinating. I'd been an academic and
was doing okay, I'd got a good reputation, but I fancied
a change in direction from the normal business of writing
(17:21):
papers and all that stuff. And then a philanthropist, David Harding,
a hedge fund manager, wanted to endow a chair in
Cambridge that was to do with improving the way
that statistics and risk were discussed in society, because he
got so fed up with all the stories in the
news and all the misunderstandings, and so he paid for
(17:42):
this chair. And if you gave three and a half
million pounds to the University of Cambridge, you could have a
chair of absolutely anything.
Speaker 1 (17:50):
And he had the good grace not to name it
after himself.
Speaker 2 (17:53):
Exactly, which is why it was the Winton. It was the Winton
Professor for the Public Understanding of Risk, which was
just fine, because he was very good. The career
advice I always give young people now is to say,
find a billionaire and get him to give you lots
of money to do what you feel like, because he
just gave the money and then was completely hands off.
Speaker 1 (18:11):
In that capacity, what was the biggest misunderstanding you encountered
about how the public understands risk?
Speaker 2 (18:19):
Oh goodness, that's so difficult. I mean, the
absolutely standard one, of course, which the media don't help
with, is the difference between absolute and relative risk. So
you know, the media stories are always full of oh, well,
if you eat meat, it's going to increase your risk
of bowel cancer by twenty percent, and so on. And
that's a relative risk. And I think it's actually true
(18:41):
that red meat, and processed meat in particular,
is associated with an increased risk of bowel cancer. And
that's what gets in the headlines: increased risk. I've got lovely,
you know, headlines of the killer bacon sandwich and this
sort of thing. But when you actually translate it, and I
talk about this all the time to school audiences. When
they hear a story like this, they want to know, well,
(19:02):
you know, is this a big number? Do we care
about this? And to know that, you have to know
twenty percent of what, in other words, the baseline risk
of which there is a twenty percent increase. Now, the
baseline risk of getting bowel cancer, sadly, is about six percent;
about one in sixteen of us will get it during our lifetime,
and a twenty percent increase over those six percentage points
(19:24):
takes us to about seven percentage points. So, out of one
hundred people eating a bacon sandwich every single day of
their lives, one extra will get bowel cancer because of that.
And that's a completely different way of reframing the story,
to make it look, frankly, fairly reassuring, especially if you
like bacon sandwiches. So it's a great example of this
(19:48):
difference between relative risk and absolute risk, because the
word percentage is used for both. It's just that in one
you're talking about a percentage increase, and in the other you're talking
about percentage points.
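The bacon-sandwich arithmetic from his answer, as code: a twenty percent relative increase on a roughly six percent baseline is about one extra case per hundred people.

# Relative vs absolute risk, using the numbers from the episode.
baseline = 0.06            # lifetime bowel cancer risk: ~6%, about 1 in 16
relative_increase = 0.20   # the "20% increased risk" headline figure

absolute = baseline * (1 + relative_increase)
print(f"risk goes from {baseline:.1%} to {absolute:.1%}")  # 6.0% -> 7.2%

extra_per_100 = (absolute - baseline) * 100
print(f"extra cases per 100 daily bacon eaters ~ {extra_per_100:.1f}")  # ~1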
Speaker 1 (20:01):
When we come back, we break down the probability that
AI could lead to human extinction. Stay with us. When
the consumer internet boomed in the late nineties early two
(20:21):
thousands and Google came along, you could either click Google
Search or I'm Feeling Lucky, and I'm Feeling Lucky would
bypass the search results and take you directly to a website.
And this is basically a way for Google to flex
and say, like, this is how incredibly good we are
at search. And they've since abandoned the I'm Feeling
Lucky button. But actually, in parallel, the whole Internet in
(20:42):
the last two or three years has become an I'm
Feeling Lucky engine, in the sense that you now get
a generative AI response rather than a selection of links
to follow, or at least you get both. How do
you see that incredible cultural shift of sort of outsourcing information,
summarization, and predictions to large language models?
Speaker 2 (21:04):
I think it's great. I'm a real fan of the
AI summary. So as long as you know, just like
using any large language model, you have to grasp the
fact that it doesn't know anything at all. You know,
all it does is string words together and
come up with something that sounds plausible. And in fact,
as we all know, it comes up if it talks
(21:24):
about facts, it can be deeply wrong and say all
sorts of things that are just incorrect. So it has
to be taken with a huge pinch of salt when
it's saying anything factual. When it's summarizing an argument, or
perhaps you know, with a discussion on a topic, I
think it can be enormously helpful. I mean, if you
just ask it a fact, you know, what's the capital
(21:44):
of somewhere, then it'll generally be right. But I think,
as someone who worked on uncertainty in AI forty
years ago, we thought we'd solved it in nineteen eighty six.
Speaker 1 (21:56):
Well, how did you think you'd solved it?
Speaker 2 (21:57):
Oh, because then the models were much more... In the
mid nineteen eighties, the way of actually handling probability, first
within rule-based systems and then within Bayesian networks, was
really developed. It was extremely successful, but of course that's
in much smaller networks.
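A minimal sketch of probability handling in a tiny Bayesian network of the sort developed in that era: two possible causes of one symptom, with inference by enumerating the joint distribution. The structure and numbers are invented, not taken from his systems.

from itertools import product

# Invented network: Flu -> Fever <- Covid (either cause can produce the symptom).
P_FLU, P_COVID = 0.05, 0.02
P_FEVER = {  # P(fever | flu, covid)
    (True, True): 0.95, (True, False): 0.80,
    (False, True): 0.85, (False, False): 0.05,
}

def joint(flu, covid, fever):
    p = (P_FLU if flu else 1 - P_FLU) * (P_COVID if covid else 1 - P_COVID)
    p_fever = P_FEVER[(flu, covid)]
    return p * (p_fever if fever else 1 - p_fever)

# Inference by enumeration: P(flu | fever observed).
num = sum(joint(True, c, True) for c in (False, True))
den = sum(joint(f, c, True) for f, c in product((False, True), repeat=2))
print(f"P(flu | fever) = {num / den:.2f}")  # ~0.39, up from a prior of 0.05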
Speaker 1 (22:14):
We're living in this extraordinary moment. I mean, Geoffrey Hinton
has said there's a thirty percent chance that AI will
drive human extinction in the next twenty to thirty years.
There are rogue genetic scientists editing the human germ line.
There is uncertainty about whether the COVID pandemic was you know,
(22:36):
something created in a lab or something that emerged organically. I imagine
what you say is timeless. But how do you suggest
navigating this particular scientific technological moment.
Speaker 2 (22:47):
Yeah, I think again by trying to think coldly about
it. Incidentally, Geoff, when I mentioned working on AI in
the nineteen eighties, I mean, I was in
Cambridge and Geoff was in Cambridge then, and we used
to think, oh, poor Geoff, because he was going around saying, well,
these neural networks, one day they'll be big enough to
really be able to act in an intelligent way. And
we used to.
Speaker 1 (23:05):
Think, poor Geoff.
Speaker 2 (23:07):
He's banging on about his networks again, why
didn't he just give up? But he was right. It
took a long time, but he was right.
Speaker 1 (23:17):
He was on Tech Stuff not too long ago. And
I asked him, how did you come up with the number
thirty percent for the probability that AI will drive humans to extinction?
He said, well, I knew it was more than one
percent and less than one hundred percent.
Speaker 2 (23:28):
It means a non-trivial chance of this happening. Really,
I think obviously there is a danger of tech. I
mean in the book, I talk about surveys that have
been done of people, you know, looking at the chance
of existential risk to the population and to the world in
general from tech. And because people do have judgments,
like Geoff does, you know, I think it probably is
a non-zero probability. We could argue about how big
(23:48):
it was.
Speaker 1 (23:49):
How do you measure it?
Speaker 2 (23:50):
Oh well, you can't measure it. There's no measurement
because it's not a number. There's no truth out there,
so you can't measure it.
Speaker 1 (23:57):
So how do you approach it, then? Do you simulate different futures? Is the best way
to approximate it with simulation?
Speaker 2 (24:01):
Oh, I wouldn't believe any simulated futures either. I mean,
simulating possible futures is a fantastic method. We've used it
all the time in prediction work, and that's what's done
in a lot of weather forecasting as well. So it's
a good idea. It's just that you'd
be so reliant on the assumptions in your models. No,
these are personal judgments. But just as intelligence analysts
(24:22):
will be assessing probabilities even now about what will happen
in the Russia Ukraine war in a year's time and things.
So these are judgments that we should all be assessing.
I think it's really valuable to work in separate teams
to come up with these judgments and the reasons for them.
So I like this sort of exercise, and I'm glad
he has put a number on it. I think my
number would be considerably lower, but you know he knows
(24:43):
more than I do. But I think the crucial
question is, once we get to something that's what you might
call a distinct possibility, what do you do about it?
You know, where are the controls that you
need to think about, where you don't just sit back
as casual participants? Many of us will be
just an audience, but that's not true of the people
working in this area, or the regulators, or the people
(25:04):
who might be able to do something about it. So
I think it does, as people, of course have said,
you know, generate the question, well, okay, how can we
reduce that probability?
Speaker 1 (25:13):
I want to bring us back to the book, I mean,
which reads as a clarion call to learn to embrace uncertainty.
I mean, is that a fair characterization. What do you
hope that your readers will take away from this book?
Speaker 2 (25:25):
Yeah. I mean, I always say it's not a self
help book, although people do seem to get quite
a lot from it. Some of it is the fact that, you know,
by owning up to uncertainty, first of all, it
shouldn't be something to dread. We live with uncertainty all
the time. We enjoy it. It'd be awful to be
certain about everything. I can't think of anything worse.
And I always ask audiences: if I could tell you,
would you want to know when you're going to die?
(25:47):
And a few people always would, just a few; they'd
like to be able to plan and things. But nearly
everybody does not want to know. You don't, you know,
on a thriller series, go to the last episode
to find out the conclusion.
You don't want to know the sports result before you
see the match, if it's recorded. And so the point
is that we live with uncertainty. I think we should
(26:09):
embrace it. It will never go away, but there are ways
to explore it.
Speaker 1 (26:13):
I want to close with this, David, you said recently,
my wildest prediction is that people will stop making predictions.
Speaker 2 (26:21):
Mm. Well, that's the one I would love. And what
I mean by that is predictions where they say what's
going to happen. And what I want is the Geoff
Hinton approach, where you give probabilities for what's going to happen.
And those probabilities may be good, I think Geoff's is a
bit high, or may not be, but at least we've
got something; we know where they are. He's not saying
it's going to happen or it's not going to happen.
(26:42):
I don't care about whether someone thinks something's going to
happen or not going to happen. I couldn't care less. I
want their probabilities of whether it's going to happen. That's
like sports pundits: when they're chatting, if you're just
chatting casually, you might say, oh, I think this is
the result. But of course, anyone taking sport seriously doesn't
say who's going to win. They work
out the odds, because they're going to be going on
(27:03):
to the betting exchanges and checking if they can get better odds,
you know, if there's a difference between the odds they think
are appropriate and the odds being offered on the betting exchanges.
So serious sporting people only think in terms of probabilities.
Speaker 1 (27:21):
David, thank you so much for joining us today on
Tech Stuff.
Speaker 2 (27:23):
It's been a real pleasure.
Speaker 1 (27:30):
For Tech Stuff, I'm Oz Woloshyn. This episode was produced
by Eliza Dennis and Adriana Tapia. It was executive produced
by me, Karah Preiss, and Kate Osborne for Kaleidoscope and
Katrina Norvelle for iHeart Podcasts. Jack Insley mixed this episode
and Kyle Murdoch wrote our theme song. Join us on Friday
for the Week in Tech. Karah and I will run
(27:52):
through all the most important tech headlines, including some you
may have missed, and please rate and review the show
on Apple Podcasts and Spotify, and reach out to us over
email at tech Stuff podcast at gmail dot com.