Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
We live in an age of tremendous uncertainty and we're
trying to get a handle on it. We're trying to
control our uncertainty, trying to control it by using the data around us.
How do we deal with that? How do we live
and get by without going crazy, in our business and
our lives, in what really is an age of uncertainty?
(00:23):
Today we're going to discover how you deal with that.
Can you create your own certainty, or do you just
learn to live with uncertainty? And how do you deal
with that in your business? Today we have Sir David
Spiegelhalter with us, an expert in this area and author
of the book The Art of Uncertainty,
(00:45):
and I love the title of your book, David. David,
it's great to have you with us on the WealthAbility
Show. If you would, give us a little of your
background; it's pretty amazing.
Speaker 2 (00:55):
Yeah, it's great to be here. Thank you so much.
I'm David Spiegelhalter. I'm an Emeritus Professor of Statistics
at Cambridge University. So I used to be at the
university teaching students and doing research and lots of other
work. I've retired from the university, but I'm still doing
lots of stuff, you know, writing books, and I'm also
a non-executive director of the UK Statistics Authority.
(01:19):
I'm on the board of the National Statistics System for
the UK.
Speaker 1 (01:25):
That's awesome. So you've been talking a lot and
studying a lot about uncertainty. Would you say that
uncertainty is getting worse?
Speaker 2 (01:36):
I don't know. I mean, people say we really are
living in an age of uncertainty, and I think there
have been other ages of uncertainty. I think the nineteen
thirties were a serious age of uncertainty, with enormous
unpredictability about world events, an enormous lack of confidence
about what might happen in the world. I don't think
we're quite as bad as, say, the early sixties were,
you know, around about Kennedy
(01:58):
and the Cuban missile crisis. People were really terrified. But
I think now, yeah, we've got some justification to think
we are also living in a really quite uncertain world,
in many ways compared with how I grew up. You
know, I'm a boomer, born in the nineteen fifties. In
a way, apart from these slightly distant
(02:21):
threats of nuclear war, it was a very safe life,
because, you know, things were provided for us in the
UK, with a very strong post-war welfare state: free education,
free healthcare, everything really. And also growing up in a
booming economy, with house prices rising and almost choosing
which job to have. And I've lived through that my
whole life. I've actually been incredibly fortunate. And that's one
(02:45):
of the things I get into in the book, maybe
jumping ahead slightly, to do with the luck of when
you were born in history. I think it's a fascinating
thing. It's called constitutive luck: just what period of time
and space and society you happened to be born into.
I mean, you had no control over that at all.
You arrive there in the world, and for some it's really good where
(03:08):
they pop out into, and for some it's absolutely
terrible where they pop into. So it's the sort of
hand you've been dealt, which you have absolutely no choice
and no control over, but which of course is enormously
influential for the whole of the rest of your life.
Speaker 1 (03:21):
Well, and it would seem like, so like you, I'm
a baby boomer born in the fifties, and for me,
I go, well, life's always been pretty clear, right? I
mean, there really hasn't been a lot of uncertainty over
the last forty years. But then my son, he's very frustrated, says, I
(03:42):
don't know whether or when I'll be able to afford a home,
what work will be like, what my money is going
to be like. So I actually have a different view
of uncertainty than he does, I think, largely based
Speaker 2 (03:57):
On, you know, when I was born. Like you, I
think it's constitutive luck, being born into a sort of
stage with a fairly stable society that's growing economically and
so on, all these opportunities. And of course the next
generation, you know, is going to be poorer than my
generation, and certainly has less certainty about job security,
pensions, and so on, as you're saying. You
(04:20):
know, all this stuff. Yeah, I think kids today have
a tough time of it. And what is actually remarkable,
it does demonstrate how well they deal with it. You
know, people changing jobs really quite rapidly, a lot of
personal insecurity, and yet they get on with it. I'm
full of admiration, because I think my generation had it,
(04:45):
had it very lucky, with our constitutive luck as to
when we were born. I always think of my grandfather,
who I talk about in the book, who had, of course,
rather bad constitutive luck, being born in the generation just
in time to go into the First World War, to go
into the trenches. Now, that is not a great time
to be born. And again, he didn't choose that; it's
just where he ended up.
Speaker 1 (05:03):
So how do you think uncertainty actually affects us
as humans? I mean, it's got to change the way
we look at things, how our minds even
Speaker 2 (05:11):
Work? Enormously, and some people are better at dealing with
it than others. People have different degrees of tolerance of
uncertainty, and there are clinical scales where you can measure
your tolerance of uncertainty. And if you have a very
low tolerance of uncertainty, it can actually lead to a
clinically diagnosable condition, because you just can't
(05:34):
bear the doubts about what's going to happen. You want
control over everything, you want to micro-control, you want
everything to be absolutely certain the whole time, and that's
impossible. And so part of being human, I think a
very important part of being human, is first of all
to acknowledge how little control we have over our lives,
to a large extent, and to be able to deal
(05:54):
with it, to be resilient to it. And when I
give business talks, you know, corporate talks and things like
that, I actually talk about personal stuff, about trying to
develop resilience to uncertainty, and how of course that's important
for organizations as well, for all organizations. There's this lovely
concept which
(06:16):
I've only really read about recently, called safe uncertainty. It
was developed in therapy, because therapists found that people were
coming to them in a situation which you might consider
unsafe uncertainty. They were anxious, they didn't understand what
was going on, they were having panic attacks, the world was a mystery,
(06:36):
it was chaotic, and they were really unhappy with it.
So that's unsafe uncertainty. And the patients, the clients, might
think that they could be cured by the therapist, in
other words, that they could be moved into a situation
of safe certainty: I've worked out what the problem is,
everything's sorted out, you know, the world is now,
(06:57):
I feel in control of the world, and I feel very safe.
But this is a delusion in terms of what the therapists
could actually achieve; they couldn't actually bring people to safe certainty.
So the emphasis was shifted, I think very appropriately, to
taking clients to safe uncertainty. In other words, you
still can't control everything that's going to happen to you.
(07:19):
Stuff is still going to happen. However, you may have
learned to deal with it. You may have developed that
resilience, so that all that inevitable unpredictability does not upset you,
does not make you feel incredibly unsafe. So this idea
of safe uncertainty, I think, is a really
powerful idea. And again I think it's a powerful idea
(07:39):
for organizations, because, you know, businesses or anybody else, you
can't control what's going to happen to you, as we're saying,
especially in the world at the moment. So you have
to be ready to deal with it. Rather than being
completely upset by the shocks and being put into a
panic by what's going on, you have to develop essentially
resilient ways of dealing with
(08:00):
it, so that you are, if possible, made even stronger
because of those shocks that you're experiencing. But
you can't expect to have that uncertainty reduced. You can't
avoid it.
Speaker 1 (08:12):
Okay. So let's talk about how you deal with that.
And let me give you a specific situation, my situation.
I'm an open book here. A couple of years ago,
we found that we had a bunch of copycats out
there, and this was not something we were expecting. I
mean, we'd been in business for many, many years, never had
(08:33):
that situation. All of a sudden, they start popping up,
they start using our stuff, and it just created this
enormous upset in the business. How does a business owner
in particular deal with that? Because as entrepreneurs, we deal
with more uncertainty
(08:54):
than most people, because we are literally trying to, you
know, control our business for our customers, for our employees,
for ourselves, our families, and all that kind of stuff.
So how do you actually learn to either predict that
uncertainty or deal with that uncertainty?
Speaker 2 (09:17):
Yeah, I mean, I can't talk about that specific case,
but I think first of all I should say that personally,
I would be useless at it. I am not an entrepreneur,
and I'm really glad I'm not, because I don't have
the personality for it. I'd just get into a state
and say, oh my god, and run around like a chicken
without a head. So I'm talking about theory here.
There's a reason I have never done
(09:42):
any entrepreneurial activity whatsoever. I'm shrewd like that;
I know what my limitations are. However,
in theory I can speak to this. Since you talk
about trying to regain control and be in control, the
first thing is you can never be in complete control.
So what you can try to do,
(10:05):
of course, is envisage, and I like this phrase, possible futures:
all the shocks that can happen, and all the good
things as well. I really love this idea of possible
futures, and that requires imagination, something I've got none
of, absolutely hopeless. But you do require, possibly, almost a
malevolent imagination. You might not think that anyone would be
so nasty as that. Oh,
(10:28):
that's just the sort of thing I would say, because
I'm hopelessly naive and optimistic, and I never think of
bad people having bad motivations or anything like that. It
just doesn't cross my mind.
So you've got to have, you know, a bit of
that mindset to do it. And I also like this
idea of red teams, which is, I think, a
(10:49):
fairly standard management thing: you have a group of
people who deliberately try to think of the nasty stuff,
who deliberately try to avoid the groupthink, the agreeable
coziness that you might have developed through success or
through stability, who deliberately try to shake things up, you know, disrupt things,
and think, do you realize what could happen? Do you realize
(11:09):
there are people trying to do this sort of stuff? So
there's the idea of the red team, and people like
the UK Ministry of Defence are big proponents of the
red team mindset, because I think they've clearly been subject
to various disasters where everyone has been very cozy and
got into an area of groupthink, all agreeing with what
was going on, and then suddenly something happens that nobody
had thought of. And so to try
(11:33):
to avoid surprises, you need to be able to think
in a slightly more nasty and malevolent way of what
they possible things that could be done against you were
and of course then prepare for them in advance, because
if you have at least thought about them, they're on
your radar, They're not the same surprise or same shot,
and you may not be completely ready. And so that's
the idea in a way of robustness of trying to
(11:55):
be ready, to be able to recover from things that
you've thought of a list of. That's not enough, though.
Speaker 1 (12:03):
So you've lived in the world of statistics for most
of your professional life, right? And statistics, I remember my
statistics classes in college, it's all about probability. Okay,
so how do you use probability to help deal with uncertainty?
Speaker 2 (12:21):
Oh, absolutely, all the time. And that's again because if
you can think of possible futures, you can assess probabilities
for them. Now, for some of those you've got good
data. If you're into sports betting or something like that,
which many of my colleagues are, you can build statistical
models and produce probabilities for the results
(12:43):
of matches and things, and use them for setting odds
and for beating, you know, the odds that are being
offered to you. So that's actually a standard approach. But
you're never predicting the future; the first thing is to
get rid of that idea completely. You cannot predict the
future, but what you might be able to do is
think of some reasonable odds for the future. And that
holds whether you're sports betting or election forecasting or anything.
(13:07):
In situations where there's quite a lot of data, and
with a bit of shrewd understanding, not just of the
data but of the situation, you should be able to
assess some really good odds, and that can be enormously
helpful, either to place bets or to hedge them, et
cetera. And again, that's what a good hedge fund will
be doing as well. They'll be producing probabilities of future events
(13:30):
and using those to guide their trading. So that's, in
a way, absolutely standard: algorithmic trading or sports betting
or something like that.
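A rough illustration of the odds arithmetic described here, not part of the conversation itself: if a model assigns probability p to an outcome, the fair decimal odds are 1/p, and a bet at offered odds has positive expected value only when the offered odds beat the fair ones. A minimal Python sketch with made-up numbers:

```python
# Minimal sketch (made-up numbers): turning a model probability into fair odds
# and checking whether odds offered by a bookmaker are worth taking.

def fair_decimal_odds(p: float) -> float:
    """Fair decimal odds for an outcome with probability p."""
    return 1.0 / p

def expected_value_per_unit(p: float, offered_odds: float) -> float:
    """Expected profit per unit staked at the offered decimal odds."""
    return p * offered_odds - 1.0

p_home_win = 0.55   # hypothetical model probability
offered = 2.10      # hypothetical decimal odds on offer

print(fair_decimal_odds(p_home_win))                 # ~1.82: fair odds implied by the model
print(expected_value_per_unit(p_home_win, offered))  # ~0.16: positive, so the offered odds are "value"
```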
The problem comes in much more open-ended situations where you
can't even list the possibilities. We're into Donald Rumsfeld's
unknown unknowns, rather than
(13:53):
the known unknowns. The known unknowns are what you can
put probabilities on, because you've listed the possibilities, and by
using data or judgment, sometimes just judgment, you can put
probabilities on things. The problem comes when you can't even
list the possibilities, and that's really difficult. I mean,
now, you know, think of
(14:14):
the world we're in at the moment. Can you really
list all the possibilities of what's going to be happening
in the US in two years' time? You know, if
I gave you a list of questions, you could think
of something, but I think the difficulty would be feeling
that you'd completely explored the space of possibilities in your imagination.
So this requires a particular skill, I think a talent for imagination. Again,
(14:36):
the UK Ministry of Defence employs science fiction writers; they've
written some great stories about things like that. I mean,
could you have ever written a story about the recent
Ukrainian drone attack on Russia, taking the drones into Russia
and launching them from right nearby, out of wagons? The
top opens up, all these drones shoot out and
(14:58):
destroy the aircraft, which they had purposefully aimed at, you
know, collected together in one place. Well, a staggering thing.
The Russians evidently didn't think of it; they didn't have
the imagination to think about it. And it's hardly surprising,
because think of the imagination you need to come up
with that and then to produce the
(15:18):
counter, you know, the defense against that, because it's
pretty tricky. But it really was a staggering surprise to people.
Speaker 1 (15:28):
Yeah. So here we are in a world where it's
pretty easy to access data. Twenty years ago it was
much more difficult. But what we have is a lot
of data that's being interpreted and then presented to us,
and I find that a lot of that interpretation is manipulative,
(15:54):
and so I don't trust it. I mean, I'll read something
and I'm going, okay, show me where that came from,
because I don't trust your conclusions about the data. You
know, there's the oldest accounting joke in the world (I'm
an accountant): you go to a good accountant and you
ask them, what's two plus two? And a really good accountant,
(16:15):
their answer is going to be, well, what would you
like it to be?
Speaker 2 (16:18):
Yeah, exactly.
Speaker 1 (16:19):
And to me, that's really what I got out of
my statistics classes: hey, I can manipulate, I can make
this look however I want it to look. So what
do you do about that? I mean, you're an expert
in this, but most business owners are not. So what
(16:39):
do you do to deal with that kind of data manipulation?
Speaker 1 (16:45):
Hi everyone, Tom Wheelwright here, CEO of WealthAbility and
Robert Kiyosaki's personal CPA for over two decades. You're listening
to the WealthAbility Show, where we help entrepreneurs and investors
make way more money and pay way less in taxes, legally.
Before we dive in, there are two big opportunities I'd
like you to know about. First of all, if you're
(17:06):
a CPA, join me and Robert Kiyosaki this July sixteenth, seventeenth,
and eighteenth in Park City, Utah at the Tax Strategy Conference.
Check out the link in the description below. And if
you're a business owner who's ready to stop overpaying the
IRS and start using the tax law the way the
wealthy do, check out the WealthAbility Accelerator. Go to
(17:29):
wealthability dot com slash bonus.
Speaker 2 (17:33):
Yeah. I regard myself, as you know, as a good statistician.
As you said, I can make any number look big
or small, frightening or reassuring, by the way I tell it,
not by lying about the number, just by the context
I give it, the framing I give it, the story
I tell with it. I can produce any emotional response
I want from any number. It's just a skill you
pick up, partly by watching what other
(17:54):
people do and learning from them. So what do we
do about that? I think you're absolutely right. What you
should be looking for is trustworthy communication of evidence and
numbers, and there's a lot of untrustworthy communication out there.
So what do you look for in trustworthy communication? First
of all, you know,
people have studied trust. And the first thing you should
look at: when I hear a number or a claim,
I don't look at the number or the claim; I
look at who's telling me this. And what you should
look for, if you want to see whether they can
be trusted: you want people who are honest, competent, and
reliable. So it's good to tick those
(18:37):
off in your mind: honest, competent, and reliable. Now, people
can be really nice. I've got friends who are honest
and competent, but they're so unreliable that I wouldn't trust
them with anything, because they won't turn up. So you've
got to have all three. Yeah, exactly.
(18:57):
We all know those sorts of people, so you've got
to have all three. So that's one of the first
things: look at the source. And again, when I hear
a statistical claim, I don't go and look at the
number first; I couldn't care less. I ask, why am
I hearing this? What response is this person or organization
trying to arouse in me? Why are they telling me
this? This did not arise from nothing.
(19:20):
There's a reason this story is being told to me,
and generally it's to manipulate my emotions, to make me
buy something, or quite often just to go, wow, that's
interesting. I mean, think of social media: you're hearing this
stuff purely to grab your attention, and it grabs my
attention. I mean, I love puppy and duckling videos. They'll
grab me like anything. And then, you know, they've grabbed me.
(19:41):
Who knows what else I'm going to get on my
feed then? So, you know, they know how to do
it. So: why am I hearing this? Why am I
seeing this? And then the really difficult thing is, what
am I not being told? I'm hearing this number, well,
how was it selected? What's the cherry-picking going on here? What else?
(20:03):
What am I not hearing? That's the really difficult thing
that requires some practice and imagination, and also a deeply
suspicious mind, a skeptical rather than cynical one, to work
out what is this person trying to do. And this
is before even looking at the number. Then you can
start looking at the number and think, well, do I
(20:23):
actually believe it? Are they lying? What definitions are they
using, and things like that. And then you can look
at the claim. The number might be right, but the
conclusion they're drawing might be deeply biased and unreliable.
So this sounds like hard work, but actually I think
one can get very used to it with a certain
degree of skepticism. This is what they should be teaching,
and I know they are teaching to
(20:45):
some extent in schools. Every kid should grow up knowing
what the triggers are, how to tear apart a number
that they're hearing on social media, on the news feed
or something like that, and to be asking these questions:
is this really a big number? If someone tells me
that, you know, eating blueberries decreases my chance of something
or other by eight percent, do I take
(21:07):
any notice of that whatsoever, even if it's true, let
alone if it isn't true? So I think this is
a necessary skill for every citizen in the modern world,
because we're being bombarded by a tsunami of rubbish,
to put it politely, all the time.
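A rough illustration of why a claim like "decreases my chance by eight percent" may not mean much on its own: an eight percent relative change applied to a small baseline risk is a tiny absolute change. A minimal Python sketch with made-up numbers, not from the episode:

```python
# Minimal sketch with made-up numbers: an 8% *relative* risk reduction
# can be a very small *absolute* change if the baseline risk is low.

baseline_risk = 0.005          # hypothetical: 5 in 1,000 people affected
relative_reduction = 0.08      # the "eight percent" in the headline

new_risk = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - new_risk

print(f"Risk falls from {baseline_risk:.4f} to {new_risk:.4f}")
print(f"Absolute reduction: {absolute_reduction:.4f} "
      f"(about {absolute_reduction * 1000:.1f} fewer cases per 1,000 people)")
# -> roughly 0.4 fewer cases per 1,000: the headline "8%" sounds far bigger.
```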
Speaker 1 (21:26):
Well, it's really essentially critical thinking, right? And one of
the challenges is we have so much coming at us.
And I see this: I actually think we baby boomers
have an advantage when it comes to AI and all
of this data, because it was harder for us to
(21:49):
determine the answer when we were growing up than it
is today. Today, you know, you get on ChatGPT and
here's the answer, right? I mean, it's really simple. Now,
it may not be the right answer, but those are
two different things. My point is that it
is this critical thinking. And one of the challenges, so
(22:11):
I'm in a profession of skeptics. I mean, nobody's more
skeptical than the accounting profession. The challenge is, you know,
even if it's not you and it's somebody on your
team, having that person. Yes, they'll be a skeptic, but
skeptic doesn't mean stop. Skeptic means go deeper, right?
(22:32):
And that's what I hear you saying: okay, don't just
say, well, I'm skeptical, therefore I don't believe it, therefore
I'm done. I still need to understand: why are they
saying what they're saying? What is their motivation? Where is
this coming from? And is there something that I could
learn from this that would actually help me with my business?
Speaker 2 (22:50):
Exactly, because if skepticism turns into complete cynicism, where you
just reject things and say, oh no, I don't believe
that, I don't believe what those scientists say, I don't
believe those probabilities, then it's really hopeless. What you're doing
is narrowing down to such a narrow and possibly very
biased source of information. It's hopeless. And you can see
that polarization happening, I think, in US domestic politics,
(23:14):
but also, you know, throughout the world, where social media
encourage this polarization, where one side just rules out everything
that they hear that they don't like. So I try
to counter that myself by listening to people I don't
agree with, by following people I don't agree with, trying
to work out what they are saying, because quite often
there is something there, you know, even with
(23:35):
people I really don't agree with, I think, yeah, I
know they have got a point. I can see why
they're saying this. I can see, in fact, why people
are believing them and following them, even though they're leading
them, I think, in an inappropriate direction. But actually they've
got a point; otherwise people wouldn't take any notice of
them. And so you do have to do that, and
it's hopeless if you just rule out,
(23:57):
you know, all the sources of information you don't like,
because you're not going to learn new stuff, and you're
not going to have anything that could bring you around
or change your mind or anything like that. So I
think we do have to try to keep an open
mind, and keep to a level of skepticism but not
cynicism, so that we do question what we're told,
(24:20):
but we don't reject it out of hand.
Speaker 1 (24:22):
I like that. You know, it's kind of like people
say, well, there's two sides of a coin. I'm going,
I think there's actually three, because there is the edge,
and I find that the greatest skill is to be
able to be on the edge and look at both
sides, which is what I'm hearing you say. So I
always want to read both sides of the argument; I don't want
(24:44):
to just read the side of the argument I tend
to agree with. I want to see what the other
side is, because there's a reason that they have that
side, and what is that reason? And what am I
missing? Because sometimes I go, how can somebody even think
that way? Yet they are thinking that way, and it's
not just a small percentage of the population, it could
be a large percentage of the population. And why are they thinking
(25:07):
that way? Because when I'm thinking about my business, I'm going,
okay, well, I have to deal with that, you know,
that other thought process.
Speaker 2 (25:15):
Right, yeah, although the two sides of the coin is
quite interesting, because I've got one here, I always carry
this. It's a beautiful two-pound Elizabeth the Second coin, and
it's got two heads on it. So I always carry
a two-headed coin, which is illegal, but never mind, you
know, just in case anyone requires me to flip a coin at
(25:36):
any time. It's always useful to have with me. But
it's also a good demonstration that, you know, if I
ask people in audiences, well, what's the chance it will
come up heads, and they say fifty-fifty, and then afterwards
I say, no, actually, you trusted me. Oh, you trusted
me. All these probabilities you were setting, you know, were
based on assumptions and judgments. We
have to realize that all these judgments we're making about
(25:58):
what might happen and what these problems are always based
on assumptions and trust. I think that so we have
to be really careful again about sources of information. And
because don't trust me for a stum.
Speaker 1 (26:12):
Well, let's go to the biggest source of information right now,
which is AI. That's where everybody's going, whether it's ChatGPT
or Perplexity or one of these other AI tools. How
do you deal with that? I mean, even if you're
asking, well, what's the probability that they're right, how do
you deal with those sources of information and whether you
trust them? Because we're being
(26:35):
told, hey, well, this is the whole population of information,
of course it's right.
Speaker 2 (26:39):
Yeah, well, the first thing is, I use it. I
tell you, I quite like Claude for some reason, I
think because my daughter put me onto it. And I
use it all the time. I use it in my
work, I use it for writing books, I use it
for just checking things and finding things out. I mean,
I don't believe it necessarily, and I check it if
it's telling me something important. But it's very good at
scanning a problem and getting an idea, making
(27:02):
a list of stuff, oh god, I didn't think of that,
and it's quite good at digging out sources and so
on. And even just the Google AI that now comes
up automatically on searches. It can be wrong, but I
regard it, you know, almost like a twelve-year-old kid who's
just read everything on the internet and then spouts
(27:22):
stuff out without actually understanding anything of what they're saying.
But they've just got this amazing memory and can spout
out all this stuff that they've absorbed. So in a
way they're not trustworthy, because they make mistakes. But actually,
and this is really important, it depends, of course,
(27:43):
on what you call the system prompts. Actually, I feel
they are trying to be honest, they are trying to
be balanced, they're doing their best. Now, they may be
wrong, and they may have got things a bit skewed,
and they make mistakes and things like that, but at
least they're doing their best. They're not actually trying to
deliberately manipulate me. But maybe I'm being a bit naive even
(28:03):
about that, because it depends on the system prompts. We
don't know what's been put into the system, which will
be things like, you know, don't be racist and don't
tell people how to make bombs, and don't do this,
that, and the other. All this stuff goes in as
these overriding prompts at the top. But people could put
anything in, and you can't find them. They're not going
to tell you them. And so it seems to me
(28:25):
that's the crucial thing is that I would like my
AI and the future to I quite like somebody to
be checking the system prompts that they're not deliberately.
Speaker 1 (28:35):
So you know they're not biased.
Speaker 2 (28:36):
I prefer error to bias, right. It's like all statisticians;
what a statistician strives to do is find a sample
that may not give exactly the right answer, but what
they want is an unbiased sample. That's the crucial thing,
because error you can deal with, in a way. You
can allow for it and adjust for it, and you've
got ways to handle it. But bias, if you don't
know about it, messes you
(28:58):
up regardless. And so I want systems that can have
error in them, but even better, of course, would be
a system that's not biased and has got some idea
of its error rate. And I know that people are
working on this a lot, so that when a system
makes a claim, it'll give some idea of its confidence
in the claim. And I worked
(29:20):
on uncertainty in AI in the nineteen eighties. You know,
we thought we had it solved. There were conferences; the
first conference on Uncertainty in Artificial Intelligence was in nineteen
eighty six. I contributed to it, for heaven's sake, nearly
forty years ago. Anyway, we thought we'd solved it, but
apparently we haven't, because the problems are still there, but
they are active problems. And I
(29:40):
would also like my system to have some humility, and
not just to say at the bottom, oh, I make
mistakes. Frankly, that really is twelve-year-old stuff. I would like
something that had some humility and some understanding of its limitations.
Speaker 1 (29:54):
So you talk about the bias on the input side,
basically, but what about your own bias, and how do
you deal with that? Because it seems to me like
even if I had a completely unbiased, accurate tool, if
I don't ask it the right question in the right
way, I'm going to get an answer that is really
not appropriate, because I'm not really asking a good
(30:15):
enough question.
Speaker 2 (30:17):
Exactly. So I think, actually, a really good assistant should
also look at the prompts it's being asked, and say,
you might want to ask this in a different way,
I don't think you're exploring it fully. It would also
suggest prompts. It'd have some meta-knowledge of what would
be a better prompt. And certainly one of the ways
the systems are exploring uncertainty is, when they're given a
prompt, to deliberately find a range of prompts that are
(30:39):
sort of similar but not quite worded the same, and
actually generate those themselves and follow those through and see
what the range of answers they get is. And that's
really nice, because it gives you an idea of the
sensitivity, or the lack of robustness, of the answer to
how that question was actually asked. Because you want a
robust response, one that isn't
(31:00):
so dependent on every single little choice of words you
might use. So there are all sorts of developments, you
know, that I think are going on, that will appear
in the future and will make this stuff even better to use.
I mean, you know, I do share the anxieties that
many people have, that it can become a little bit
too easy to use, and of course it can become
a bit too intelligent. And that's why I'm really interested
(31:22):
in the system prompts and, I won't say controls on
it, because I'm not big on regulating everything, but at
least transparency. It's like recommendation algorithms on social media,
or misinformation on social media. I can't control it. I
(31:44):
can't censor what people put on YouTube or anything. I
mean, it's impossible, you can't censor it. However, what you
could have is some transparency about what the recommendation algorithms
are. With misinformation, you know, I think the essential problem
is the outcome of it being pushed to people, because
it just generates clicks. It generates attention and
(32:07):
sustains that attention, which is what every algorithm is trying to get.
Speaker 1 (32:14):
I like that. And I love your idea of having
somebody else help you with this. Of course, I'm an
advisor, and I think that, you know, sometimes we just
can't learn everything. I mean, I'm never going to be
a statistician, for example, so I'm never going to be
the one to come up with those probabilities. It's not
my profession. It's your profession. You're really good at that, but you're probably
(32:34):
never going to be a tax professional or an accountant
like I am. This is where I think we can
actually get benefit from our advisors and the people that
we have on our team around us, so that we
have other people with different views.
Speaker 2 (32:51):
If you were in the UK, I'd even now be
asking you for some tax advice on VAT. So yeah,
we all need help. We all need help. I mean,
if you were a doctor, I'd start asking you about
my different problems. So yeah, we all need help, and
I help people; I like doing that. And in the
end, you know, you want AI to be something like an assistant,
(33:15):
an intelligent assistant, but with some humility built in. Good
advisors know what they don't know and actually warn you
of that; they know the limitations of their knowledge and
try to make sure that they don't go beyond what
they feel confident about. And so that's what you expect
from a trustworthy advisor. If you
(33:35):
go and talk to somebody, say a doctor, that's what
you want. You want them to really do the best
they can, but to admit when they're stumped, or where
there is real uncertainty and you have got options ahead
of you that might be worth talking to somebody else
about or really reflecting on more deeply. This is what
you would want when you go and see your doctor.
Well, that's what I want when I talk to AI.
Speaker 1 (33:58):
You know, I love that. I go back to what
you said earlier, that you want the person you're getting
information from to be honest, competent, and reliable, and you
want to understand their motivation. And when you look at
an advisor, whether it's a doctor or anyone else, okay:
are they honest, are they competent, are they reliable? And what is their motivation?
(34:19):
Because you can pretty much tell: is their motivation, I
want to help you get better? Or is their motivation,
I want to run more tests so that I can
make more money? Right, that could be a different motivation.
And so I think those are really four good things:
honest, competent, reliable, and motivation. I think that's very practical,
and I really appreciate that, David. Again, the book is The Art of Uncertainty:
(34:42):
How to Navigate Chance, Ignorance, Risk and Luck.
Speaker 2 (34:46):
I've got the US version with me, but this is it.
Speaker 1 (34:53):
This is absolutely a terrific topic, a terrific book, Sir
David. It's been fabulous having you on the WealthAbility Show.
Any final words for our audience?
Speaker 2 (35:03):
Great fun talking to you, honestly. And you know, next
time I would like some tax advice, please. It'd be really...
Speaker 1 (35:11):
You got it. Because remember, we do live in uncertainty,
and that is the way life is; we can't change
that. I think the idea of understanding what's behind information
is so important. David, you've really given us some really
good, simple tools for that, because when we do that,
what I do know is we'll always
(35:34):
make way more money and pay way less tax. We'll
see everyone next time.
Speaker 2 (35:39):
Thank you. Okay, bye. Thank you so much, it's been great.
Speaker 1 (35:43):
Thanks for listening to the WealthAbility Show. If today's episode
gave you a new perspective, remember this: the tax law
is not your enemy. It's a roadmap, and when you
know how to follow it, you can build real, lasting wealth.
If you're a business owner or investor who's tired of
overpaying tax, the WealthAbility Accelerator is your next step. You'll
(36:03):
have the opportunity to work directly with me for eighty
percent less than my standard rate, and I'll personally guide
you through how to change your facts so that you
can change your tax. Go to wealthability dot com, slash
bonus and apply today. Remember it's not just what you make,
it's what you keep. This podcast is a presentation of
(36:31):
Rich Dad Media Network.