
July 22, 2025 • 57 mins
"[Question: So what was the biggest misconception for most business leaders usually when it comes to operationalizing AI governance?] Based on my interactions and conversations, now suddenly they think they have to erect a whole set of new committees, that they have to have these new programs. You almost hear a sigh from the room. Like, oh, we have now this whole additional compliance cost because we have to do all these new things. The reason I see that as a bit of a misconception, because building on everything that was just said earlier, you already have compliance, you already have committees, you already have governance. It's an integration of that because otherwise guess what's gonna happen? We all know that this is the next thing around the corner that's gonna pop up, whatever it's gonna be called. Are you gonna have to set up a whole new committee just because of that? Then the next thing, another one." - David Hardoon

Fresh out of the studio, David Hardoon, Global Head of AI Enablement at Standard Chartered Bank, joins us in a conversation to explore how financial institutions can adopt AI responsibly at scale. He shares his journey from academia to government to global banking, reflecting on the fascination with human behavior that originally drew him to artificial intelligence. David explains how his time at the Monetary Authority of Singapore shaped the FEAT principles (Fairness, Ethics, Accountability and Transparency), emphasizing how proper AI governance accelerates rather than inhibits innovation. He highlights real-world implementations, from autonomous cash reconciliation agents to transaction monitoring systems, showcasing how banks are transforming operations while maintaining strict regulatory compliance. Addressing the biggest misconceptions about AI governance, he emphasizes the importance of integrating AI frameworks into existing structures rather than creating entirely new bureaucracies, while advocating for use-case-based approaches that build essential trust. Closing the conversation, David shares his philosophy that AI success ultimately depends on understanding human behavior, and asks the fundamental question every organization should consider: "Why are we doing this?"

Episode Highlights:
[00:00] Quote of the Day by David Hardoon #QOTD - "AI governance isn't new bureaucracy."
[00:46] Introduction: David Hardoon from Standard Chartered Bank.
[02:02] How David's AI journey started with human behavior curiosity.
[07:26] Governance accelerates innovation, like traffic rules enable fast driving.
[10:31] FEAT principles at MAS Singapore born from lunches with compliance officers.
[14:23] Don't reinvent governance wheel for AI implementations.
[24:17] Banks already manage risk; apply same discipline to AI.
[28:40] AI adoption problem is trust, not technology.
[34:21] Autonomous AI agents handle cash reconciliation with bank IDs.
[36:00] AI reduces transaction monitoring false positives by 50%.
[39:54] AI requires full supply chain from infrastructure to translators.
[41:52] Organizations must reward intelligent failure in AI innovation.
[44:47] AI hallucination is a feature, not a bug, for innovation.
[47:35] Measure AI ROI differently for innovation versus implementation teams.
[56:27] Final wisdom: People always ask "why" about AI initiatives.

Profile: David Hardoon, Global Head of AI Enablement, Standard Chartered Bank

Personal Site: https://davidroihardoon.com/

LinkedIn: https://www.linkedin.com/in/davidrh/

Podcast Information: Bernard Leong hosts and produces the show. The proper credits for the intro and end music are "Energetic Sports Drive." G. Thomas Craig mixed and edited the episode in both video and audio format. Here are the links to watch or listen to our podcast.

Analyse Asia Main Site: https://analyse.asia

Analyse Asia Spotify: https://open.spotify.com/show/1kkRwzRZa4JCICr2vm0vGl

Analyse Asia Apple Podcasts: https://podcasts.apple.com/us/podcast/analyse-asia-with-bernard-leong/id914868245

Analyse Asia YouTube: https://www.youtube.com/@Analys1eAsia

Analyse Asia LinkedIn: https://www.linkedin.com/company/analyse-asia/

Analyse Asia X (formerly known as Twitter): https://twitter.com/analyseasia

Analyse Asia Threads: https://www.threads.net/@analyseasia

Sign Up for Our This Week in Asia Newsletter: https://www.analyse.asia/#/portal/signup

Subscribe Newsletter on LinkedIn https://www.linkedin.com/build-relation/newsletter-follow?entityUrn=7149559878934540288


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
So what was the biggest misconception for most business leaders usually when it comes to operationalizing AI governance then? Based on my interactions and conversations, it's that now suddenly they have to erect a whole set of new committees, that they have to have these new programs. You almost hear a sigh from the room. Like, we have now this whole additional compliance cost because we have to do all these

(00:21):
new things. And the reason I see that as a bit of a misconception is because, again, building on everything that was just said earlier, you already have compliance, you already have committees, you already have governance here. It's an integration of that. Because otherwise, guess what's going to happen? We all know that this is the next thing around the corner that's going to pop up, you know, whatever it's going to be called.

(00:41):
Are you going to have to set up a whole new committee just because of that? And then the next thing, another one. Welcome to Analyse Asia, the premium podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong, and today we are diving into the critical question of how financial institutions can adopt AI responsibly at scale. With me today is David Hardoon,

(01:03):
Global Head of AI Enablement at Standard Chartered Bank, a distinguished leader with over two decades of experience spanning financial institutions, academia, startups and regulators. David played a pioneering role in shaping Singapore's national AI strategy for the financial sector during his time at the Monetary Authority of Singapore, where he was instrumental in developing the FEAT Principles

(01:26):
for ethical AI use in finance. He combines deep technical expertise with strategic insight, making him one of the most influential voices advocating responsible AI adoption globally. So, David, welcome to the show. Thank you very much, Bernard. And you're being far, far too kind. I simply had opportunities. I'm privileged to support, and this is the space that I love,

(01:48):
so I'm very privileged in that particular aspect. So, David, you had a career spanning academia, startups, regulatory bodies, and now global banking. What drew you first to data and AI? What drew me first to data and AI? So my usual response is going back to a story from when I was 16, and I say detention, which is

(02:10):
true because that's actually what ultimately drew me to the world of programming and Pascal. But in a broader sense, what drew me is actually trying to better understand human behavior. And I know this may sound a bit weird, because, like, what do you mean? But if you go back to the origin of artificial intelligence and whatnot, actually, its roots are in psychology.

(02:31):
Its roots are in neuroscience. Its roots are really in kind of unlocking and unravelling the mystery of us. That's what drove me, and then maybe combining the two things together, the behavioral psychology, the human nature aspect of things, and programming. That realization that, wait, hold on a second. We can actually program a computer to do this interesting

(02:54):
task, like a baby or a young child where you present them pictures of a car and a motorbike or whatnot. And even without fully understanding the context of what that thing is, they can learn to distinguish it. Honestly, I was hooked. That was the beginning of the journey. So I think, what are the formative experiences that actually shaped how you now

(03:16):
approach AI and analytics? I think that goes hand in hand. You and I have had private dinners discussing this. You know, data is extremely important. Yeah, OK. So I would bucket it into three, perhaps three different kinds of pillars, and they are continuously evolving, because as we all know, data is continuously evolving.

(03:37):
So as I mentioned, the very starting point was that I was just, you know, morbidly curious about this. Wait, we can learn, we can actually program. I want to go into it. And that kind of resulted in going to the more theoretical side, where I was interested in that fundamental question of how do you learn? What is the thing that drives it? And for the audience and for the people listening in, we're familiar with the different types and the variety of the

(03:59):
methodology. Actually, what interests me, and not that I'm promoting or recommending my thesis, it's pretty antiquated by now, is that I was fascinated by the world of semantic models. So the premise behind semantic models is capturing information, capturing knowledge. And my idea, and what I got fascinated about, was that if you can capture knowledge, actually the

(04:20):
learning mechanism becomes secondary. So that was one pillar that then took me to the next dimension. And, some may argue, my first step into the quote-unquote dark side of, wait, how do you make this real? How do you actually start implementing it and turn it into applications, into programs, beyond just a piece of paper? And you find that there is a

(04:41):
delta. There is a difference. I used to have, how should I say, interesting debates with my supervisor, who was a theoretical mathematician. Well, you know, you had a theory, and it's like, but it didn't quite work in practice. I was like, but what do you mean it didn't work in practice? Well, there are other factors that we need to consider. And then finally, what I'll call the last stretch of the period, it's not

(05:02):
just theory, application, operationalization. And you can say, David, OK, you're splitting hairs. And I would push back and say, no, there is a delta, because what takes a theory to an application has considerations, but what takes an application to being operationalized also has very specific considerations.

(05:24):
And an example from many years ago, and I won't name the person or the bank: I was asked to go talk about optimization. And of course I was like, yeah, I'm happy to go talk about optimization. Let's just say I got a good scolding. That was my starting point, when I realized actually, it's not about optimization. It's not about the algorithm, it's not about the application.

(05:46):
It was about, how do I use it? How does it make a difference? How well does it get integrated into my day-to-day? So in a rough sense, those are the three buckets I see it as. I think one interesting thing is, I mean, even for me, because I think we were around in the UK at the same time: when I was working on machine learning and doing the algorithms, I was sitting right next to the Human

(06:08):
Genome Project. So the development of algorithms may be important, but what we were, you know, tasked to do was actually to further the data analysis and the annotation of the human genome. So a lot of that focus actually forces you to be more production level. And I think that transition is quite interesting. But maybe just help me understand: when you transitioned,

(06:30):
say, from academia to regulatory roles, which I have never experienced, how did it influence your thinking about responsible AI? Oh, absolutely. And I can say without a shadow of a doubt, and to give them full credit, I learnt of, and have absolutely appreciated

(06:51):
to this day, the world of governance and regulation. I mean, if you had asked the David prior to working with the regulator, candidly, I wouldn't even have thought about it. It wouldn't even be there as a concept, other than, like, yeah, you know, there are guidelines. Walking into it, I realized two things. In fact, I realized it because I

(07:12):
used to get asked this question: Oh, David, don't you think that governance, compliance, regulation, whichever form you want to call it, is a brake on, an inhibitor of, innovation and the adoption of new technologies? And that really got me thinking, really got me thinking, because as you rightfully put it, I'm not a regulator.

(07:33):
You know, I've come into the world of regulations, and what I've realized is that actually we need governance, not because, oh, it's the right thing to do. No, we need governance in order to accelerate our adoption and usage of technology. And let me give you perhaps a

(07:54):
silly driving analogy. If you know the road, if you know the rules of the road, if you know the different speed limits at every single point, if you know where the traffic lights are, if you know where the speed cameras are, well, guess what? You can drive fast and you can enjoy it. It's when you don't know where things are. You have the people who, you

(08:15):
know, stop, slow down, and you start, under your breath, yelling, like, why are you going 30 when you're supposed to be going 50? You know why? That's exactly the point. And what you find a lot of times, ironically, because there's this kind of aversion to the G word, you know, governance, and people try not to get involved with it, is that's

(08:37):
resulting in things slowing down. That's resulting in having all of these starts and stops. It's by having the proper governance, having the transparency of it, and having the conversation, like I had the privilege of having. And you mentioned FEAT as one of the examples of, OK, what is it that we're trying to achieve? What is the control, or the bad thing, that we're trying to prevent?
(09:00):
OK, coming from technology, coming from the research, this
is how we can go about achievingit.
So you see now it becomes a constructive conversation rather
then this is what you need to doand you go like it doesn't make
any sense. Yeah.
So I'm thinking that I have two strands of conversation that I

(09:21):
want to engage you. So I I think given that you have
talked about feet, let's start with AI governance and
implementation pieces. So you were instrumental in
developing the feet principles at MASI.
Think Paul Cuban, my mentor who's formerly from DBS was a
part of that committee as well who looked at it.
Can you share the origin story of the framework?
Oh, I OK, I'm giggling because it's a, a bit of an unusual

(09:45):
origin. So as part of the mandates that
I, that was given in MAS was actually supporting and
developing the industry with respect.
It's its option of AI and I, I like going into the what I
believe in, you know, first principles and root cause.
And when you looked at the industry and you spoke with
people individually, yeah, you know, everyone said we do AI.

(10:07):
But then when you really open upthe box, it's like, is it in
production? Is it being used?
Maybe, maybe not. So I said, you know what, rather
than sitting in an ivory tower, presuming that I know or we know
something, let's have a conversation.
And it was an interesting conversation because I took the
compliance offices from the various banks locally to lunch

(10:31):
and they were very shaken. They were like, wait, wait,
wait, why is MAS taking us to tolunch?
And it was because I was really wanted to have an open
conversation as to what are the challenges that they're facing,
what is holding them back or not.
Or it could be as it was an interpretation of what they
think is the intention. And then I had a separate lunch

(10:53):
and conversation with the again,variety of titles from CIOCDOCIO
heads of same conversation of what's stopping what, what's
what's that inhibition? And what I mentioned earlier,
that realization that having good governance is in fact an
accelerant to adoption. That's exactly when the penny

(11:15):
dropped, when it was you're telling me that compliance is
stopping you? OK, let me take its face value.
I speak to compliance and like, well, this is our understanding.
And you realize that you had, you know, every hand was trying
to do the right thing, but no one's gross properly clapping.
And actually what was needed wasa set of, well, let's put this

(11:35):
out in the in the market, in theaudience, in the industry of
this is our expectations. This is what we believe should
be done. Notice though, for the
frustration of some at the beginning, we didn't say how
then the fee principles, it's not prescriptive.
It's providing the objective that is trying to be met.

(11:58):
And, and then let me give you, and you see where I'm going with
this one specific example of, you know, this whole debate of,
oh, can we use sensitive attributes?
And you know, you have differentrestrictions coming with
different stuff. We didn't say you can or you
can't. What we said is you need to
justify it. And why do you need to justify
it? Because there needs to be an
underlying intent behind it. So to make sure there's the

(12:21):
right reason, and it's aligned with the organizational objectives, you see. So it was a focus on what is your intention, what is it that you're trying to achieve, and then saying, well, now it's up to you. That resulted in the ability to have internal clarity. These are the things we're trying to achieve, and this is why we're trying to achieve them. And yeah, it could be a debate. And then the people downstream,

(12:42):
in terms of implementation, go, we have clarity with respect to the speed limits. And it's not about you saying to me, no, but David, why is it 50 kilometers an hour, not 70? Sorry, this is what we decided. OK, fine. Then I will drive at 50 and I can get to my point, rather than trying to fight the law and going 70 and then getting arrested, or going 30, because you

(13:05):
see my point. It's creating that clarity. So actually the outcome was the realisation that clarity on governance was needed, and that's effectively what FEAT is. So I think you've also articulated the lessons learned from applying these governance frameworks across organizations, from government to corporates.
from the government they cooperate with.

(13:25):
So the question then is how do the FEAT principles actually
help, say a financial institutions in If you can give
me some colour to it, then balance with innovation and
ethical considerations for AII. Remember when I was talking to
one of the prominent banks when I was still in AWS and one of
the things that we we did a verydetailed pilot program where we

(13:46):
literally have to do all the data masking, privacy protection
and making sure that everything works and even, you know,
deploying it from on premises into cloud with everything
encrypted. So how does those things
translate into the real world? So, so, so let me try and give
you a few, a few examples of your analogies, but let me start

(14:07):
off by saying, remember what I said in the very beginning, it's
first principles. Why do I keep going back to
that? I keep on emphasizing it is
because we have an and it's an understandable tendency,
especially when it comes to new technology to go, oh, everything
is new. It's it's new invented.
We need to basically it's a blank canvas.
We have to start from the groundup.
Whereas I personally slightly take a slightly more different

(14:31):
approach by saying, well, no, especially when it comes to
governance, when it comes to responsibility, when it comes to
intent, that shouldn't change the objective.
What may change is the approach in achieving it.
Now, again, by saying this, don't get me wrong, of course
there'll be an introduction of new concerns, new risks, 100%,

(14:54):
but fundamentally is the antennasaying and again, let me give
you an example. So prior to it, there'll be this
massive deliberations on, oh, you know, discrimination by AI,
discrimination by AI. Can you use it in A and and loan
algorithm or can we use it in HR?
Can we use it this and what effectively is it decomposed the
problem? It decomposed the problem by
saying number one is AI aside, how do you deal with these

(15:19):
concerns today? And if the answer to that is we
don't, well, there you go. You need to have an approach, a
mechanism, whether it's part of a committee, whether it's part
of a, a principle, a conduct andculture.
Not for me to say to have that. That's number one.
Number two, if you do have it, see now it's the question of,

(15:42):
OK, how do we incorporate the world of AI? So you see how that innovates, because you say, OK, we acknowledge we have, let's say, a principle with respect to fairness, and this will sound like an oxymoron, but a fair discrimination approach. But now, because we acknowledge that AI may amplify the potential risk that comes

(16:03):
with it, we need to have a more resilient manner of identifying it. Well, guess what, I need to know, again, allow me to use an example. Let's say the risk is in terms of gender discrimination. I need to know your gender in order to assure that I am taking a fair approach with respect to that. So you see, it now creates a mechanism for providing

(16:26):
that justification, rather than taking the approach of, oh no, we can't do it, we can't use it. And again, let me give you, well, the famous reference of what happened with the Apple Card and Goldman Sachs, where there was a big hoo-ha, and they actually said, but, you know, from a regulatory point of view, we're not allowed to use gender information. So what you find is that when you start using the more

(16:49):
sophisticated types of techniques, not using an attribute explicitly within a model doesn't mean it's not there. So you potentially need to use it in order to identify the risks correctly.
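To make that point concrete: a fairness check of this kind only works if the reviewer can see the sensitive attribute, even when the model itself never uses it. A minimal sketch of the idea, with purely hypothetical column names and illustrative data:

```python
import pandas as pd

# Hypothetical decision log; column names and values are
# illustrative only, not from any real system.
decisions = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "approved": [1,    0,   0,   1,   1,   1,   0,   1],
})

# Approval rate per group: the basic quantity behind a
# demographic-parity style check.
rates = decisions.groupby("gender")["approved"].mean()

# Disparity ratio: lowest group rate divided by highest.
# A value well below 1.0 flags a gap that needs justifying.
disparity = rates.min() / rates.max()

print(rates)
print(f"disparity ratio: {disparity:.2f}")
```

Note that the check consumes the gender column even if the model is never trained on it, which is exactly the justification David describes.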
That's one particular example. The other one is about the appropriateness of an application within an organization. I still remember this. I had a conversation with two,

(17:10):
because we had this little tri-party debate, where one reviewed the potential application of AI within their HR practices and decided it wasn't a question of can we or can we not do it, because obviously you can. They decided, we don't want to, because of our approach, our conduct, blah blah blah, whatever it may be. We are actively choosing not to leverage these capabilities,

(17:34):
for right or for wrong. Not an issue, because it's aligned with a certain manifesto they have. Another organization said, well, we have chosen to use AI and we are cognizant of the potential discrimination. But actually it's OK, because we're specifically looking at, let's say, a program which is anyway designated for women, so it's not relevant for men, et cetera and so forth.

(17:55):
So you see, it kind of forced us to think about the intent. And that's what I meant by, we get distracted by technology. Like, for example, what happened now with Gen AI? Oh, we need to worry about Gen AI governance, Gen AI governance. And I kind of coyly came back by saying, well, again, it's not that it's not introducing new risks. Let us take the beautiful world of hallucinations.

(18:16):
We had that little conversation about how hallucination may be a bug for some and a feature for others, as you said. But actually, again, first principles: what happens when you have an intern that does something weird, that does something wrong? You see my point. How do you handle that? I remember a very good lawyer friend of mine, you know, he was slightly up in arms about the fact that Gen AI is competing,

(18:38):
and he said, but you know, all Gen AI is doing is putting words together in a way that seems reasonable and logical. And I, in a slightly characteristic fashion, cheekily looked at him and said, well, isn't that what you as a lawyer do as well? And he said, touché, touché. But you see my point? It's effectively forcing people to go back to first principles and not reinvent the wheel.
to go back to first principles and don't reinvent the wheel.

(19:00):
These are things which we must address and understand 1st and
then apply methodologies in order in terms of either
addressing them or preventing incertain cases.
So what was the biggest misconception for most business
leaders usually when it comes tooperationalizing AI governance
then? Oh, wow.

(19:22):
I think to me, based on my interactions and conversation is
that now suddenly they have to erect the whole set of new
committees that they have to have these new programs that
they have to it's like this, this it's it's almost like you
almost hear a sigh, you know, from the room like we have now
this whole additional compliancecost because we have to do all

(19:43):
this new things. And the reason I see that as a
bit of a misconception, because again, building on everything
that was just said earlier, it'syou already have, you know,
compliance, you already have committees, you already have
governance here. It's an integration of that.
Because otherwise, guess what's going to happen.

(20:03):
We all know that this is the next thing around the corner that's going to pop up, you know, whatever it's going to be called. Are you going to have to set up a whole new committee just because of that? And then the next thing, another one. I mean, it's just, we have to be pragmatic. I just want to switch the conversation a bit, and I think now I want to get into AI within, say, financial institutions or the corporate

(20:25):
setting. I think, given your corporate experience, what is currently the mental model on AI adoption within financial institutions, given the field is changing practically every week? We sit in a couple of WhatsApp groups, you know, with all the AI builders, and with the amount of stuff that's coming across, everybody is like, oh my

(20:46):
gosh, what's going to happen next week? No, you're absolutely right. And I think this is where, well, I think everyone's struggling. I mean, I'm happy to get pushed back with, I don't know, David, we've cracked the code and it's solved for us. But I think if you look at it from an enterprise point of view,

(21:06):
and again across the industry, I think it's fair to say everyone is struggling, exactly to your point, because it is just moving so damn fast. And there's still a mental mindset, or a shift, that needs to be taken from, you know, the SDLC, the software development life cycle, to,

(21:29):
OK, how do we now set up an environment and a platform that allow us this kind of elasticity? And candidly, that's still, again, obviously not in the tech world, which had the luxury of kind of setting the foundations right. If I may again be fair, it's a

(21:49):
bit tricky. It's a bit tricky when you already have a monolith, when you already have a lot of legacy systems, when you have a lot of technical debt. But in that same vein, I think that is exactly where the shift has moved. It's not just about the use cases and applications, because let's just be transparent, I think there are many out there, but in how you create this operationalization

(22:15):
environment. And by environment, by the way, I'm kind of referring to the full stack: not just infrastructure, not just platform, not just embedded API gateway capabilities, and not just people, but what allows us to go, OK, we want to do this use case, we want to do this use case, we want to do this use case. Oh, hold on a second. There's been an advancement. Now it's, you know, GPT 10.12. OK, how do we plug this in?

(22:37):
So it gives you this modularity within a governed and safe mechanism of review. I would say that's the mindset, that's the goal. But as with everything, the devil's in the detail, and it's not always that easy to do. But for banking environments, because

(22:58):
of the way financial institutions are regulated, one big trouble is actually data leakage. I see it with companies that I advise. I guess maybe I just want to hear a much more practitioner point of view, right? How do you think about it when you want to operationalize, say, an AI model within an environment that is so highly secure, robust, resilient?

(23:20):
I think the requirements are extremely strict, and I really don't envy the person who has the trouble of thinking through all that. What's the thinking from your perspective when you try to make that kind of engagement? So, for the bank itself? So that's actually a very interesting point. And I'll be candid, I

(23:41):
think it's kind of also an evolution in my thinking as well. And I don't want to sound like a parrot, but let me go ahead and get back to first principles. For a financial institution specifically, a regulated entity, risk is nothing new, because that's finance. You estimate risk, you price risk and you engage. And if you think about it, OK,

(24:03):
let's focus on the top line, let's focus on the services side for a second. And I'll get back to the core of your question, but I just want to set the premise right. We do very risky things. I mean, we're buying debts, we're, you know, buying potential futures. We're making bets. We're doing pretty damn risky stuff that can go wrong.

(24:26):
But, and this is a very important part, we know how to estimate it, we know how to review it. We know how to create frameworks within it. We know how to go, yes, I know what is the risk I'm exposed to, and I know how to mitigate it, and I know what to do when it does happen, if it does happen. And that's why regulators go, yeah, go ahead, please do your work.

(24:47):
I don't think, and again, I don't want to be unkind, but as I said, this is an evolving thought process. I don't think we apply that same type of discipline and maturity on the back side of the house. And notice, exactly like you said, it's like 100% security. You know, I remember I was in a,

(25:08):
in a conference once, and we were talking about data leakage in terms of deanonymization of data. I said, if you legally require the inability to deanonymize data, don't do it, because 0.0000001 is still a risk. So we take this extreme view, which as a side point is ironic, because I actually think that increases the risk.

(25:30):
But again, that's a different conversation. So my recent thinking is, no, we need to take the same mindset that we have in the front office to the back office. So it's not about zero risk. Step one is understanding the risk. Do we understand it? And then you suddenly realize

(25:50):
not all risks are the same, number one. Number two, what is it that we're really trying to mitigate? Number three, what is our risk appetite? I'm sure you've heard the term, you know, risk-based approach. But then when you look at it, it's like, yeah, but you're trying to achieve zero risk.

(26:10):
That's not risk-based. So you get my point. Ultimately, obviously, we want to eliminate, we want to mitigate leakage, of course, but there needs to be a pragmatism-based approach towards it. And ironically, I actually think that will make it more resilient and safer. It doesn't mean that bad things don't happen, because

(26:32):
again, we live in the real world, but it means it actually will be safer because we're cognizant about it. And when things go wrong, we calmly know what to do. What do they always tell you when there's a fire happening, when there's an earthquake? They always tell you, stay calm. It's when you overreact. And so, same thing. We can calmly know how to deal

(26:54):
with it and how to mitigate it. That's kind of the lens that I'm applying, and that's what I'm trying. I'll tell you whether it's successful or not. No, I'm quite curious, right? Like, for example, if you think about the large language models within the AI space today, one of the questions is open source versus closed source models, right? And enterprises need certainty, right?

(27:16):
So one of the things that, say, Anthropic does is that they provide through a cloud provider, whether it be Azure, GCP or AWS. And then there is a kind of a policy saying that whatever you do with this here, there's no data leakage into those environments, right? So there is that one question. So there's one layer of

(27:38):
complexity that you need to deal with. Then there is a second layer, where the CEO has gotten productivity gains through using ChatGPT and decided that, hey, you know what, everybody should turn on their Copilots, only they didn't realize that they forgot to turn on the security switch, and everybody gets to read the calendar of the CEO and the emails of the CEO, right? Where is that?

(28:00):
How would you, like, do you apply a first-principles approach to risk in each one of these situations, or is it something much more like, OK, this is a specific enablement use case, this is a development environment use case, and this is something else that can be a customer service use case at some point?

(28:22):
So I think the honest answer is, and look, I'm very happy to learn from others, but I think we're still in the evolving process. Because one thing I kind of forgot to mention earlier is, and people should not underestimate this: trust. As an example, it's like when you go to a doctor. Do doctors always get it right? No, sometimes they get it wrong, and sometimes pretty dramatically wrong. But we fundamentally have a

(28:44):
mechanism of trust. The problem that we have, and I'm segueing this into your question here, that we lack with all the excitement of the world of AI, is trust. We just simply did not have the time to build it. And that's the thing: there are certain things that we can't skip in that process. I know this sounds very

(29:07):
emotional, but we're human and we have to remember that. No, no, that's the core of it. You're absolutely correct on that, because Ping, who came on the show in a previous episode, made exactly the same argument: it is about trust. Whether an AI application takes off is not a question of the technology, because the technology is so commoditized.

(29:29):
The question is how much trust you can build with the user. So in that vein, the approach that I still prefer to take is a use-case-based approach, because what it allows you is to start having that elasticity. So, for example, you would say to me, hey, David, you know what? This productivity gain, let's see.

(29:49):
Let's take Copilot. It's awesome. I'm an activist. It's like, OK, fantastic. How about we do it for your team? See my point? Yeah, yeah, let's see. Because then if something goes wrong, OK, at least it was in your team. Yeah, that's right. That reminds me of when I was doing digital transformation, right? And I had an agreement with the CIO, say, hey, you know what,

(30:12):
we're going to start with Slack. And I said, well, your team can try it first, and then let me know when we can bring it over to everyone. And going back to the point of framework: I almost see the whole process of AI, and I try to look at things in a way which results ultimately in an assembly line. And the reason I

(30:34):
do that is because it allows for some extent of consistency and replicability. So both in terms of development of AI, but then also more from a strategic point of view, basically the strategy of AI as part of that assembly. Exactly. Is it just because we're doing it for, let's say, your department? Are we doing it for your particular use case, let's say in banking?

(30:56):
So let's say treasury liquidity management, or a particular bancassurance product, whatever. It doesn't mean that I can't scale it up rapidly, but at least it creates these deliberate, conscious checkpoints to kind of see, OK, are we going on the right track? And then to backtrack, because, let's say, I'll give you a perfect example: as we're doing it, we may find that, just like with the case of HR at that particular institution, I'm

(31:18):
sorry, Bernard, we're not going to switch on Copilot. I'm sorry, we're not going to switch on Copilot for you, or we're not going to do this use case for you. And the reason we're not doing it for you, but we're doing it for them, is because of A, B, C and D. Again, it goes back to setting the boundaries. And I was having this wonderful conversation with Simon, full credit to them. And we were talking again about speed limits, and they said, like,

(31:38):
oh, you know, the speed limit is, let's say, 70 kilometers per hour, but we're only going at 40. I said, but hold on a second. Just because the national speed limit is 70 kilometers per hour, is 40 fast or slow? Maybe as an organization, not right or wrong, we're saying we don't want to go 70, we want to go 40. Or

(31:58):
certain parts of the organization, because they're driving these massive semi-trailers, you know, it's like, no, even if you go at 70, it's really going to be risky. Whereas a small little MG, you know what, go for it. You can go at 70, even 71. You see my point? It's about contextualization. And this goes back to the premise and mindset from SDLC, in

(32:20):
how we used to do tech: we build and we roll out. AI has fundamentally, you know, if I can think of the right anecdote, just fundamentally changed the mindset, in that it's now contextual to the level of application. It is contextual. It depends. So going in by thinking, oh,

(32:43):
we're just going to have this one ring to rule them all, I'm sorry, but, my personal point of view, you will fail. And that's going back to FEAT. It's one ring in terms of spirit, in terms of, Bernard, I want to make sure you do not cause harm. How you take the approach to not do harm, I'm sorry, but that's

(33:07):
up to you, because it depends, and it's contextual to you. And now you also see that internationally. Again, no right or wrong. What China's doing is not right or wrong. What the US is doing is not right or wrong. What Europe is saying is not right or wrong. It's contextual to them, because they all want to achieve the same goal of do no harm. So I think, if you think about practical examples of AI

(33:30):
implementations, and I think let's not talk about just your organization, given that. But maybe why don't you talk about, say, very good, interesting examples you have seen outside, or implementations that illustrate some form of value creation. I always get this question from

(33:51):
a lot of CEOs, and sometimes some of them, they're really doing it. And then I'm also quite embarrassed to say, hey, actually your AI implementation is pretty interesting, I would like to learn more. This is what I meant by, you know, to really be candid and frank. So, some really exciting stuff that's been out there, and, you know, I've recently been learning a bit more about the operational side of

(34:12):
the house. For example, you have, and this is an in-production use case, where you have literally fully autonomous agents doing cash reconciliation; almost think of it as self-healing. So when errors occur, usually you would have a person come in to do some reconciliation or re-correction. Now you literally have agents which have a bank ID. They have a bank e-mail, and they are

(34:32):
able to do these micro-tasks as part of a larger console. So that is absolutely mind-blowing, seeing it in operation, all the way to your front-end line. Again, this is, you know, finance is, you know, same, same. It's about identifying products. It's about hyper-personalization.

(34:53):
It's just doing it. So the one which actually got me excited, ironically, was bancassurance. Why did it get me excited? It's because no one likes to talk about mortality, and it's a very heavy product. Yet they used AI to really be able to identify the cohorts of people who need it, because you're planning a family, you're going

(35:14):
to the next stage of the journey. So just like, you know, food apps and whatnot saying, oh, you want to get this discount, it's the same thing, bringing it to life and seeing the actual impact. To the area which I do care about from, you know, my past life in the regulator: financial crime. I have seen some applications which candidly blow things out of

(35:38):
the water. Let me give you two concrete examples. Transaction monitoring today, our solutions, and I'm not throwing mud, because it is what it is: 97% false positives. So for every transaction alert that is flagged, actually, it's a lot of noise. Imagine you're dealing with a system that has a 90-plus percent noise ratio relative to actual correctness.

(35:59):
I've seen an overlay of AI that has been able to drastically reduce that by up to 50%.
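As a back-of-the-envelope illustration of those two figures, assuming the 50% reduction applies to the false positives alone:

```python
# Illustrative arithmetic only, using the rates quoted above.
alerts = 1000                               # alerts raised by the system
fp_rate = 0.97                              # 97% are false positives

false_positives = alerts * fp_rate          # 970 noise alerts
true_positives = alerts - false_positives   # 30 genuine alerts

# An AI overlay that suppresses up to half of the false positives
# leaves the genuine alerts untouched but shrinks the review queue.
remaining = true_positives + false_positives * 0.5   # 515 alerts

precision_before = true_positives / alerts            # 3.0%
precision_after = true_positives / remaining          # ~5.8%

print(f"alerts to review: {alerts} -> {remaining:.0f}")
print(f"precision: {precision_before:.1%} -> {precision_after:.1%}")
```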
Now don't just think of it from a banking operational efficiency point of view; think of it from a customer service and enjoyment level. Now, when you're overseas and you're trying to buy a friend or your wife dinner, and your card gets

(36:19):
declined because it incorrectly got flagged. Now suddenly it's a lot more fluid. All the way to preemptively mitigating the risk. Not to jump into the world of cybersecurity, but those who deal with the world of cybersecurity know the zero-day attack. When they've already infiltrated the system, it usually sits there on average about, what is it, 130 to 180 days

(36:41):
before the actual attack; it just sits there. So again, using these agents to identify these vulnerabilities before the incident occurs. So you see, it's shifting this mindset and the ability of operations from the kind of reactive mode to things. And effectively, we're correcting, we're adjusting,

(37:01):
we're seeing this lift in satisfaction, and then finally really going down to the level of providing the level of service that is needed. And then a final point that I'll mention on this: we're currently in a very interesting situation whereby we have no choice, because we as consumers, individuals to corporates, we are using these tools.

(37:22):
Like you just mentioned, you've got, you know, GPT on your phone, so you literally go to your bank and branch, and it's like, how can you not do that here? Let me help you. You know, it's a level of expectation that has gone up. And I can tell you candidly, it's now a competitive advantage, because the provider that can do

(37:42):
it will just naturally get the audience, because, like, yeah, that's the level of service, that's the level of engagement that I expect, as I can do it at home. I have a question: if that's the case, right, then, as you're definitely building a team in order to help other teams be AI-enabled, what kind of skills or roles have actually become critical?

(38:06):
I get this question all the time. Yeah, somebody sends all the C-suite people to come in and talk after being blown away by the AI. Then the first question they ask me is, how do I get that downstream? And then I say, I think, you know, there are some things that do not change. Management, your company culture, how it gets people to adopt technology, that doesn't

(38:27):
change, and that has to be adopted the same way as you have done it before. You see me nodding, 100%. Let me break my response into two parts. Let me talk about the people, and let me then talk about something you just said right now, which I, again, want to really, really emphasize, because I feel sometimes, because this world is seen as very technology, we actually forget about it.

(38:52):
I don't think deliberately, but we just forget about it. So the first one is back to my point that I try and see everything as a supply chain, same thing. AI is not one thing, it's a supply chain. So you do have to have everything from your infrastructure people, again, whether or not you're a cloud advocate or you have your own, you do need to have the people that understand

(39:12):
the underlying architecture of this stuff, because at the end of the day, everything rides on top of it. Two, the world of Gen AI, where in fact the word AI can be a bit misleading: it's a lot of full-stack software development, but also from an integration point of view. Of course, as you kind of go up the supply chain, again, look, I've seen various ways of calling them, from, you know, AI

(39:34):
translators and the like. I mean, I've kind of recently defaulted back to just the term product owner, AI product owner, just to make it simple. But what it simply is, is a profile that, you know, knows enough about the technology to be dangerous, but understands and can communicate with the business, and vice versa. Because, look, I used to joke and say that if you

(39:56):
can find a data scientist, like I used to say this in the past, that can understand and do the world of data science, you know, code, implement and do that, and at the same time stand in front of the business and coherently explain it simply in their terms, that is a unicorn. Lock them up in the room and don't let them leave, because

(40:16):
that is very rare. Usually you'll find that people are very good at different stages on that horizontal. So that's necessary. And then to the second part, which you were touching on: actually, I know that I come from this world, so it's kind of easy for me to say, but as one would say, the tech is easy, it's the people, it's the process. It's like, for example, remember

(40:38):
we just spoke about, you know, the agents and reconciliation, et cetera and whatnot. How do you not just change today's process? How do you fundamentally reimagine it? How do you reward the new managers versus the existing managers? What are their KPIs? You know, if you go to the extreme, let's say we have organizations whereby the way you get promoted, Bernard, is

(41:00):
when you've gone from managing one person to managing 100 people. OK, I get promoted because of that. But now suddenly I'm still managing one person, but I'm also managing 90 bots. These are the agents, 90 agents, exactly. When you have this at massive scale, it's almost like an organizational talent necessity of how do I do

(41:21):
that? And I'll give you one final example. Remember, I was in front of an audience not too long ago, I think maybe a month, two months ago. And we were talking about the culture of innovation. And I said, OK, let's talk about the culture of innovation. And I looked at them and I said, do you reward your staff today for failure? And they just stared at me and said, well, David, what do

(41:42):
you mean, do we reward them for failure? I said, no, literally, I'm being as literal as possible. Do you reward them for failure? And obviously I'm saying it provocatively. I said, OK, let me double-click. Now, when I say failure, I'm obviously not talking about cases where you, in a malicious or, you know, negligent manner, resulted in issues. Obviously not. I mean, that's a, you know, fireable offense. But let's say I come to you, and

(42:02):
let's just role-play for a second. You report to me, and, Bernard, I'm going to give you a mandate: I want you to roll out this use case of AI across the business, blah, blah, blah. And you go ahead and do it. Well, guess what? Remember what I just said? It's contextual. Just because all the pieces of the puzzle are there doesn't mean it will work, right? Maybe it predicts in a certain manner. Maybe the level of gain is not

(42:24):
as we envisaged. Life happens. Do I even still reward you? Do I give you a healthy bonus? Because you've done a damn good job in trying to do that, yet it did not succeed. I would err on the side of saying probably not. I see, now, that's something that

(42:44):
has to change if the intention is innovation. Again, this is a personal point of view. It's about the intent in which we're trying to move. We can never neglect or forget about the human element. The irony for me, and remember what was my starting point, what drove me into the world of AI, is understanding humans, is

(43:05):
understanding us. And if there's one thing I've discovered over the years of doing more and more with AI, it is that it's about us. And yet we sometimes forget it. Yeah. I once asked my class, a course I teach on generative AI in the university. I asked everyone, because I dropped in this study about hallucination: they know which

(43:26):
part of the neural network is actually turned on when the hallucination shows up. And then I asked everyone, should we turn it off if we know where it really is? And everybody was like 50-50 on it. And then I said, well, there are actually studies in psychology and other disciplines, right, that show that actually experts have the highest tendency to hallucinate. Think of it very simply,

(43:49):
right? A medical specialist specialising in cancer encounters something that is not in the textbook. Well, Bernard, I'm willing to go out on a limb and bet a lunch, which I'm happy to have anyway, that almost all, if not most, you know, leaders, innovators, at the point of inception were challenged by

(44:11):
people around them saying, you're hallucinating. And one that just pops into my mind: saying electric vehicles will be a common commodity, going to outer space, etcetera, etcetera. At the point of inception of the idea, people look and go, like, you're hallucinating. It's like, that's not going to happen. Yeah, that is the whole point, right?

(44:32):
And then, now, you have to impose such strong standards onto the AI they implement. They don't. You find it very ironic. This is why hallucination for some is a feature, but for others it's bad. Therefore it's really important to be self-aware as to what is it that you're trying to achieve. I know it sounds, it sounds

(44:53):
borderline. Sorry, complete tangent, but talking about talent: I was at a university round table and we were talking about the necessity to train students more on AI governance, to help promote a culture of, you know, innovation and AI. And I kind of thought about it for a second. I said, actually, you know something, well, obviously conceptually one could not

(45:17):
disagree. What we need to teach is philosophy. What I need to encourage you to do is to read more about Greek mythology, because if anything, we need to have more thinkers. If anything, we need to ask more questions, because if anything, AI now facilitates that.

(45:39):
When my best friend, you know, from literally pre-kindergarten, called me up in a frenzy and said, ah, you see, I told you you're going to be the reason for the end of the world. You know, she's a teacher. And it's like, you know, these Gen AI and these Jaspers are going to basically get me fired. And I said, no, if anything, it's challenging you to teach more. When we keep on

(46:02):
saying that, oh, it's critical thinking that is important for the future, well, guess what? That's exactly what you have time to focus on, because the, you know, model answers and the easy stuff don't work anymore. Let the agent handle it. Yeah, I have one more question that's quite important, and I think this is a business conversation that a lot

(46:24):
of business owners like to ask me: how do you measure your ROI or value from AI initiatives beyond cost savings? Oh, crikey. That's a fun chestnut, that one. OK, let's start with: it's very important to understand the team's raison d'être, the reason for being. Why do

(46:46):
I say that? Because if the team's purpose is innovation, there is no such measurement. In fact, I would say you're kind of deluding yourself, because with innovation, your revenue, your potential, you know, quantifiable benefit, is a downstream impact. And look at the world of AI, I

(47:08):
mean, there's stuff that's been happening since the 1940s, 1950s, hello. So the KPIs, the way of measurement, need to be in accordance with that. However, for the rest, which is really about adoption, which is about implementation, about operationalization, I do believe in, I don't know why this might sound a bit unfriendly, but I do believe in enforcing a quantitative and, for simplicity, financial monitoring.

(47:33):
Now, let's say I build a solution that helps a front line in terms of selling more to customers. Obviously, they're the P&L holders, I'm just in the background, but I will still spend time initially to compare: look, OK, what was your baseline, let's say for the last three months, and now what is your baseline since using these capabilities, to basically be able to report that delta.
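A minimal sketch of that baseline-versus-delta comparison, with hypothetical figures and window lengths:

```python
# Hypothetical monthly sales for a team using a new AI capability.
baseline_months = [100_000, 104_000, 102_000]   # three months before
current_months = [115_000, 118_000, 117_000]    # three months after

baseline = sum(baseline_months) / len(baseline_months)
current = sum(current_months) / len(current_months)

# The delta initially attributed to the capability and reported to
# the P&L holder, before the uplift is normalized as the new baseline.
delta = current - baseline
uplift = delta / baseline

print(f"baseline: {baseline:,.0f}  current: {current:,.0f}")
print(f"delta: {delta:,.0f} ({uplift:.1%})")
```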

(47:57):
But then, of course, it gets normalized. Similarly on the cost-saving side. So I would look at it on both fronts, and I'll create a mixed bag of the impact from cost savings, but as well as the revenue impact from, in other words, your top line. And there's also risk, by the way. So we talked about, let's say, reducing the number of false

(48:17):
positives. I would also measure that, or mule detection, in financial gains, in terms of how much money have I helped protect, you see. And the reason is because it's easier to interpret, and then, to be fair to the organization, it becomes easier to identify, like, actually, you know, something in that area just doesn't make

(48:38):
sense. Yes, we can do this, but candidly, just no. It helps prioritize. I see. Yeah. OK, please continue. No, no, I just want to say, just the last point about cost saves: yes, we do it, I do it, I've always done it. But AI at the end of the day is really about knowledge.

(49:05):
It's about insight. I remember I had a, not a debate, because it was more my boss at the time giving me a scolding, so it was a one-sided debate, where he said, well, David, you know, you've rolled out this initiative for operations, and you know, we claim that. So in this particular case, it was that we were able to reduce the number of times you needed to call a customer in order to

(49:26):
result in the collection. So let's say if I previously had to make 5 calls to collect $1.00, now I only have to do 2 calls to collect that $1.00. So yeah, cost saving. But it'll be like, but David, we still have the same number of people. And see, this is why sometimes it gets hairy. Number one is, I can't control the usage of people, because I'm ultimately a provider of a capability, and I cannot detract from

(49:50):
the outcome of the capability, which is that it results in less need. But number two is, the reality of today is that this will probably result in more insight about how the consumers are behaving. And you would find that you're assigning the people to work on that additional insight, ways of creating more processes, or creating more services. My point is that we shouldn't

(50:14):
undermine the value-add created by knowledge, if you see where I'm going with this. Yeah, yeah, no, I get that point. So yeah, I think the point of it is not in the cost savings. It's the way you're going to use that knowledge to create the value. Exactly. Exactly. It's not like a, you know,

(50:36):
net-zero kind of scenario where you say you shave here, but you gain here. So it comes together. So this comes to the final question, right? How do you see the role of AI now evolving for the next few years within financial services? Is it going to be pretty gradual, or is it going to be a

(50:58):
very, very fast track, similar to a lot of other industries who are now trying to work out what they want to do with AI? Well, I can tell you what my hope is. My hope is it'll be fast-tracked, because, look, everyone's investing. I think you need to be living underneath a rock that's underneath another rock to discredit the tangible

(51:22):
value that has been provided. And this is actually a fact.
This is a fun time to live, because before it was more
difficult to explain, to visualize, to demonstrate the
value of using these knowledge-based systems.
Now it's just so easy.
It's the way we operate, the way we do things on a
personal basis. So I would love to see that

(51:43):
accelerating. Now realistically, of course,
there'll be your movers and shakers
and your followers. And that's how it always will be,
because candidly it goes back to the points we
discussed earlier about your existing environments, your
technical debt, your legacy. I mean, some of

(52:04):
them will be existential questions, because you may look
at your house and go, wow, you know, I can renovate it, but
it's going to cost me ten times more than just tearing it down
and building a new one. But I can't really tear it down,
because I have people living inside, you see.
So these are questions that go beyond, again,
technology and AI and empowering skill sets.

(52:25):
So David, man, thanks for coming
on the show. Before that, I have two closing questions.
The first one, and I call it the Isha's question:
on a personal level, how do you use AI tools or platforms to
enhance your leadership, decision-making or productivity?
OK, number one. So sometime last year
I got a tick on X, so I got access to Grok.

(52:45):
I use Grok extensively. My preferred approach is
that I don't have Grok write for me; I write, but I have
Grok essentially help me in terms of grammar, in terms of
recommendations. What's also nice is that it actually
looks at my historical style,
and it goes, "Well, David, this has been
your style," and suggests some changes. So it's actually kind of helping

(53:06):
me as part of, again, my evolution.
Secondly, and again, I'm not ashamed to say it: I like
to test ideas. So one prompt, which by the way
I recommend to everyone, is to literally end with "ask me
questions as part of this conversation."
No, no, seriously.
I know, you're thinking: what the heck, asking for questions?
Ask before answering, because you want it to

(53:28):
ask you clarifying questions to probe you towards what you want,
because a lot of people, understandably,
go: oh, what's this? How about that?
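A minimal sketch of that prompt pattern, as a reusable suffix. The wording and the foreign-exchange topic are illustrative assumptions, not a quote of David's actual prompt:

```python
# Sketch of the "ask me questions before answering" pattern recommended
# above. The wording and topic are hypothetical; any chat assistant works.

SUFFIX = (
    " Before answering, ask me clarifying questions as part of this "
    "conversation, so your answer is steered toward what I actually need."
)

question = "How should I think about our foreign-currency exposure this quarter?"
prompt = question + SUFFIX
print(prompt)
# The assistant then interviews you first (scope? horizon? risk appetite?)
# instead of guessing at an underspecified question.
```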
Let me say, I remember I had a conversation about, you
know, foreign currency exchange, geopolitical tailwinds,
that kind of thing. And it's fascinating,
because again, it goes back to my underlying premise, my first
principle. It's about knowledge.

(53:50):
It's about how do I use it to improve me.
And back to your point about teams, also how to engage, how to
provide for them. Then I'm really surprised
you are not using Anthropic's Claude, because it is the default
for them, like I said. It goes back to ease of access.
I see, it goes back to ease of access.

(54:10):
It's like, you know, I mean, OK, there's the
reason there were, you know, antitrust lawsuits and
all that, but I'm not going down that path.
Yeah. But when you get an Android
phone, it already comes with a Google browser.
It's just there. It's not that I
wouldn't use Claude if I had it; if it were the one, I would.
It just happens to be what I'm using.
I decided to be a professional user because

(54:30):
enterprise-wise I'm using it to help companies do their
financial automation. But of all of them, it
is my most preferred when it comes to deep thinking.
I agree with you, Grok is really good.
It can solve theoretical physics problems, but there's
some stuff it's not really... Yeah, yeah.
Yeah, but the math part for some reason is
better in Claude 3.7.

(54:52):
Yeah. OK.
So maybe, as we're tailing off, just to
add something which is actually going to be quite interesting.
I mean it from a behavioral perspective: how it
evolves. We're now faced with, well, once upon a time it was, you
know, you have Disney Plus, Hulu, Netflix, and it's
just different content, and you basically have, let's say,
multiple subscriptions, or you say, no, no, I'm just going
to stick with Netflix. But now it's not just that you

(55:13):
have different, let's just simplify it and call them, agents.
Their capacity is different. Their knowledge is
sometimes even a bit different. The way they interact with you
is slightly different. So I'm kind of wondering whether
you will find that, you know, if
people subscribe to one rather than the other, whether they

(55:34):
kind of, almost, and again, I'm stretching here,
find their reality evolving slightly differently in the way
they're thinking, the way they're doing things.
Or, on the contrary, you'll find aggregators saying, you know
what? I am now giving you the ability
to pose your question, or have your dialogue, or
search for information through a super node that basically gets

(55:55):
it from all of them. Yeah, yeah, there is an app for that.
There you go. There you go.
Because, think about this: why do
companies, why do people run panel discussions?
Why do you have a survey? Why do you speak to multiple
experts? It's because you're like, OK,
let's get a few views and let's challenge one
another. Let's do that.

(56:15):
So that's going to be really interesting.
I think that's going to be a very interesting development.
My final short question: what's the one question that you
wish more people would ask you about AI?
Why? Why.
Listen, just why.
Like in everything, to go again to the why: why are we doing it?
It's literally just, why?

(56:37):
That to me is the biggest one.
Do you have an answer to that? I do.
It depends. OK, David, many thanks for
coming on the show. And how can my audience find
you? LinkedIn.
I know, LinkedIn. And I've got a website
at davidroihardoon.com. But please, I've

(56:59):
succumbed to the visibility of the World Wide Web.
So, OK, you can definitely find this podcast on YouTube,
Spotify and LinkedIn. And of course, we're
gonna continue talking. So, David, many thanks for
coming on the show. And I shall also say thank you,
and thank you very much. Cheers.
Bye. Bye.