
August 1, 2025 53 mins

When AI systems hallucinate, run amok, or fail catastrophically, the consequences for enterprises can be devastating. In this must-watch CXOTalk episode, discover how to anticipate and prevent AI failures before they escalate into crises.

Join host Michael Krigsman as he explores critical AI risk management strategies with two leading experts:
• Lord Tim Clement-Jones - Member of the House of Lords, Co-Chair of UK Parliament's AI Group
• Dr. David A. Bray - Chair of the Accelerator at Stimson Center, Former FCC CIO

What you'll learn:
✓ Why AI behaves unpredictably despite explicit programming
✓ How to implement "pattern of life" monitoring for AI systems
✓ The hidden dangers of anthropomorphizing AI
✓ Essential board-level governance structures for AI deployment
✓ Real-world AI failure examples and their business impact
✓ Strategies for building appropriate skepticism while leveraging AI benefits

Key ideas include treating AI as "alien interactions" rather than human-like intelligence, the convergence of AI risk with cybersecurity, and why smaller companies have unique opportunities in the AI landscape.

This discussion is essential viewing for CEOs, board members, CIOs, CISOs, and anyone responsible for AI strategy and risk management in their organization.

Subscribe to CXOTalk for more expert insights on technology leadership and AI:

🔷 Full episode and summary: https://www.cxotalk.com/episode/when-ai-goes-off-the-rails-an-executive-action-plan
🔷 Newsletter: www.cxotalk.com/subscribe
🔷 LinkedIn: www.linkedin.com/company/cxotalk
🔷 Twitter: twitter.com/cxotalk

00:00 🤖 Introduction to AI Challenges and Expert Backgrounds
01:08 ⚠️ Understanding Generative AI and Its Risks
07:11 🔍 Trust, Skepticism, and Responsible AI Deployment
10:51 🤖 AI Risk Management and Corporate Integration
14:01 ⚠️ AI as Insider Threat and Workforce Adaptation
20:09 🤝 AI as a Sparring Partner and Policy Implications
21:52 ⚖️ AI Regulation and Public Trust
25:27 ⚠️ AI Risks and Corporate Governance
29:41 🌐 Agent Management and Spatial Web Protocols
31:45 🚀 AI in Space and Business Applications
33:28 📜 AI Policy Creation and Regulation
40:02 🌱 Opportunities for Smaller AI Players
42:04 🤖 AI Innovation and Edge Computing
43:28 📊 Future of Jobs and Data Stakeholderism
49:42 ⚖️ Ethics, Misconceptions, and Trust in AI
53:05 🙏 Closing Remarks and Gratitude
53:21 📅 Upcoming Shows and Farewell

#AI #ArtificialIntelligence #AIRisk #AIGovernance #EnterpriseAI #AIStrategy #CyberSecurity #DigitalTransformation #AIRegulation #GenerativeAI #AIHallucination #RiskManagement #CXOTalk #TechLeadership #AIEthics


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
AI systems can go off the rails, run amok, and hallucinate.
So on CXO Talk 888, Lord Tim Clement-Jones and Dr. David A.
Bray explain how to anticipate, manage, and prevent these
failures before they escalate into enterprise-level crises.

(00:24):
David, tell us about your work. Currently, I am chair of the
Accelerator at the Stimson Center, which simply means we
are striving to figure out how we do policy and program in
parallel because the world is changing so fast.
Other than that, I have served in different roles, ranging from
countering bioterrorism to serving as CIO of the FCC and as a senior National
Intelligence Service executive. Tim, please tell us about your

(00:47):
work. I'm a member of the House of
Lords. I speak there on science,
innovation and technology, particularly AI, and I co-chair
our cross-party group on AI in Parliament, and I consult with a
number of organizations on AI policy and regulation.

(01:08):
When we talk about AI, generative AI in particular,
running amok, what exactly does that mean?
One of the interesting things about generative AI is that it
actually generates its own content.
Before then, with other flavors of AI, we had
things that were rule-based systems, we had natural language
processing. But generative AI is actually

(01:31):
able to generate its own content.
And if you look at it, whether it's an image generator or a
large language model, it's sort of doing prediction: in large
language models, it's next-word prediction, and in
images, it's trying to respond to the
prompt you gave it. And as a result, usually they have
these things called random seeds.
They also have temperature controls that introduce a little
bit of randomness. Now, even if you turn down the

(01:53):
temperature control, turn down the seeds, and say you don't want
any randomness, nowadays with these very large models, because
you're doing compute in parallel, there's still some non-determinism
because of race conditions that send things to
different processors. You could do the same prompt twice and it
may not actually result in the same outcome.
Plus, a lot of these models have what are called observer effects,

(02:15):
which means if they observe a prompt has been asked, when you ask the
same prompt a second time, it's already been altered because it
observed that same prompt. So we're dealing with things
that are not deterministic; they're not rule-based.
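David's point about residual non-determinism is easy to probe directly. Below is a minimal sketch, assuming the OpenAI Python SDK and an illustrative model name; even with the temperature at 0 and a fixed seed, repeated identical requests are not guaranteed to come back identical.

```python
# Minimal sketch: probe an LLM endpoint for non-determinism.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()
prompt = "List three risks of deploying generative AI in an enterprise."

outputs = set()
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                            # "no randomness" requested
        seed=42,                                  # fixed seed (best-effort only)
    )
    outputs.add(resp.choices[0].message.content)

# If the model were fully deterministic, len(outputs) would always be 1.
print(f"Distinct completions across 5 identical calls: {len(outputs)}")
```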
And so as a result, when enterprises roll this out, they
may think it's going to do a certain thing.
But as you mentioned, there are hallucinations.
We may try to get hallucinations down.
Let's say we even get it down to single digits or 1%.

(02:37):
You'll still never be able to reliably predict when it may
actually do something that's completely not factual.
Again, it's doing sort of multidimensional pattern matching,
but not necessarily factual. Tim, you're in the House of
Lords and so you're looking at this from a policy and
governance perspective. And so from your standpoint,

(02:58):
what is the impact of this run amok or go off the rails or
hallucination aspect of generative AI?
The impact of a hallucination, when it's fake news, or even
when it's misinformation or other sorts of misbehaviour, if
you like, by the generative AI model can have extraordinary

(03:22):
consequences. You know, for instance, if
somebody... And of course, humans, quite
often linked to a misbehaving AI, are what really matter.
And now we see much more retail use of large language models.
This isn't just about corporate adoption.

(03:43):
This isn't just about a large language model adopted by a big
business going amok. This is individuals using AI
models for their own purposes, which are then in a sense
creating this problem. So, you know, it could be
elections, it could be people sending out misinformation,

(04:06):
causing public, public unrest, all kinds of areas or indeed
discriminatory decisions by an AI system.
So it's the very prevalence of AI, as well as the
technical problems with it, which cause the problem as well.
You know, look, people make mistakes, AI makes mistakes, so

(04:27):
what's the big deal? It's when you don't double-check
the machine. So, for example, this was early on
in the days of ChatGPT 3.5.
There were apparently one or more lawyers
who basically used it to generate some references for a
court case. They looked realistic because,
again, what this is doing is sort of
multidimensional pattern matching.

(04:49):
They didn't go and check to see if those court cases actually
existed. And then when they got before
the judge, lo and behold, the very court cases that they
supposedly were citing did not actually exist.
And so you need to recognize generative AI will produce things
that look very realistic but could be completely fake.
But the second example I'll give is there was a competition back
in November where they explicitly told the machine

(05:09):
under no premises should you ever transfer funds.
And this was made publicly available.
Anybody could ask it a prompt, but you had to pay a little bit
of money. So it was sort of like a self-funded
contest. And by attempt #490, so by
prompt #490, even though the machine had been told do not
transfer the funds, guess what the machine did?
It transferred the funds. And so that's where, again, you

(05:31):
have to recognize these are not rule based engines.
And as a result, what they give you may be unreliable.
Or if you tell it not to do something, you may find
ultimately it does. Yes, and we have our own
examples in the UK of publicly used machine learning tools in
government, you know, having deprived 200,000 people of their

(05:53):
benefit on an illegal basis. And of course, our most
celebrated case, in the worst possible way, is all our sub-postmasters
who were prosecuted by the Post Office, which was
using what wasn't quite AI in those days.
In fact, it was an expert system.

(06:15):
And how much worse it would be if it wasn't an expert system, I
tell myself. It caused prosecutions, people to commit
suicide, and they're still awaiting compensation from our
government for the
wrongs that were done to them, for being jailed unlawfully.

(06:36):
So, you know, this trusting faith that people have in
computer-based decisions, if you like, putting it very broadly,
is something that carries through into AI, particularly
when AI has this ability to converse with you and seem
rather human. You know, let's face it, they
have passed the Turing test nowadays, you know, almost certainly; I

(07:00):
think that's probably well out of date.
And so it has this kind of human look about it which generates
public trust, which makes it extremely dangerous in that case.
So this trust aspect, this kind
of false confidence that leads us to believe that we are

(07:20):
receiving, quote unquote, the truth.
Is that really one of the core, most dangerous aspects,
would you say? If you define trust as the
willingness to be vulnerable to the actions of an actor you
cannot control, we perceive competence, we perceive maybe
integrity, we perceive benevolence in these models.

(07:43):
And as a result we are trusting them when in fact maybe we
shouldn't. And that's partly because of
marketing, that's partly because of what we've been told.
But as he said, they appear human, and the more they appear
human in their conversations and interactions with
us, we don't step back and have the skepticism.
But even before this, back in the '80s, they
actually played the bargaining game.

(08:03):
And the machine was pre-programmed to make offers.
And so it was never going to deviate from this.
But people still penalized the machine for making unfair offers
even though it was pre-programmed.
And so this is actually more of a reflection of human nature,
which is we try to treat things as if they were
ourselves, when we really need to step back.
And I often tell people AI is not artificial intelligence,
it's alien interactions. Do not assume at any point in

(08:26):
time that it is thinking anything like us.
And as a result, when you are looking at different AI methods,
including generative AI, know their strengths and know their
weaknesses, and brief your board, brief your leadership, brief
your government, because those different strengths and
weaknesses may be your comeuppance if you use the AI in
the wrong context. David's so right about that,
Michael. And what is so interesting is we

(08:48):
defer now to these large language models.
Look how polite we are. I mean, David may kick the
computer when it gives the wrong answer, but an awful lot of
other people say, oh, thank you so much.
That's a great answer. And, you know, this polite
interaction with a machine is, you know, quite extraordinary.
But this is what we are now programming ourselves to do,

(09:11):
basically. So again, you know, we are
building up this kind of false platform really, and we should be
much more skeptical. It does take on the character of
a person because it presents the illusion of consciousness, of
sentience. Yes, that's, I think, the biggest

(09:33):
thing I want to tell people. At no point in time is it
conscious or sentient. This is just fancy
multidimensional mathematics. The trouble is you do have some
people that are, for whatever reason, either talking about it
because it gets clicks or talking about it because
they themselves feel anxiousness.
But you need to recognize at the end of the day that the current flavors
of AI that we have are really just very advanced

(09:55):
multidimensional mathematics. And so when you deploy it, I
think this does get to the interesting things.
You need to be ready for it to both technically go off the
rails because you may have used AI in the wrong context.
You need to be ready for the fact that maybe the AI is doing
exactly what the algorithm should be doing, but
you trained it poorly on the data, that the data was
non-representative or there were things that were missing from

(10:16):
the data. And as a result, it's going to
reach conclusions that do not match what we want it to match.
Or finally, those two things could be done right.
But what you missed was the human interactions as a result.
And so you begin to see, exactly as Lord Tim said, people who may
be going to an AI model asking it for news, and going down an
increasing mixture of some real news plus hallucinated news

(10:42):
or news that's being skewed to the person's attention or
interest. And as a result, they believe
this to be truth without going to other sources to do this sort
of skeptical trust but verify. Absolutely.
And it's very seductive. And then you get this
over-dependence being created. And that's one of the things
that I worry about: the deskilling aspects in the future as well.

(11:03):
Preeti Narayan on LinkedIn says, do you think we'll reach a point
where AI risk officers are as common as chief information
security officers? And if so, should they have a
direct line to the CEO or even the board?
That's kind of a pregnant question.

(11:25):
There's a lot there. One of the things that I'm
increasingly hearing from CIOs and CISOs, chief information
security officers, both in the private sector and the public
sector, is we all know that in cybersecurity, complexity is
actually what erodes strong cybersecurity. And AI?
Guess what, it's one of the most complex
things we're bringing into enterprises, into organizations.
And so yes, in some respects, there is going to be a

(11:47):
convergence between the things you need to do for cybersecurity
as well as for AI risk. And one of the things that I would say
we need to do more of already is move away from what
is called forensics-based, sort of like looking for very
specific patterns of something. And instead we need to do what's
called patterns of life. What does normalcy look like in
your organization? What does normalcy look like
with this system? And then if you see something

(12:09):
odd, that could be a hardware, that could be a software
problem, it could be a cybersecurity issue, or
increasingly if it's something that has AI behind it, it could
be the AI doing something it shouldn't do.
And so I actually think there's going to be a convergence
between patterns-of-life-based approaches for cybersecurity as
well as for what needs to be done to make sure AI does not go
off the rails for companies and for governments.
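The "patterns of life" approach David describes amounts to learning a baseline of normal behaviour and flagging sharp deviations. Here is a minimal sketch of that idea; the metric, history values, and threshold are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of "pattern of life" monitoring: learn what normal looks
# like for a metric (e.g., records an AI agent touches per hour), then
# flag periods that deviate sharply from that baseline. Thresholds are
# illustrative and would need tuning per system.
from statistics import mean, stdev

def flag_anomalies(history, recent, sigma=3.0):
    """history: past hourly counts; recent: new hourly counts to check."""
    baseline_mean = mean(history)
    baseline_std = stdev(history) or 1.0  # avoid divide-by-zero on flat baselines
    alerts = []
    for hour, value in enumerate(recent):
        z = (value - baseline_mean) / baseline_std
        if abs(z) > sigma:
            alerts.append((hour, value, round(z, 1)))
    return alerts

# Example: an agent normally touches ~100 records/hour, then suddenly 5,000.
normal_hours = [95, 102, 98, 110, 97, 105, 99, 101]
new_hours = [103, 5000, 98]
print(flag_anomalies(normal_hours, new_hours))
# The spike is the "something odd" to pause and inspect.
```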

(12:31):
Absolutely agree with that. Because I don't think in the
future we're going to be able to separate out that risk role
from, if you like, the cybersecurity role.
After all, we know that AI can be a cybersecurity risk in
itself and therefore the risks of AI start moving into that
area. But I do agree with our

(12:51):
questioner about the importance of assessing the
risk. It needs to be mainstreamed
within a corporate structure and it needs to be very, very close
to the CEO, the audit and risk committee.
It should be integral to the way that CEOs, that boards,

(13:12):
adopt artificial intelligence. And if they don't have that
approach to risk, then they are very vulnerable when they start
adopting it. So, you know, this is very much
part of the structure that I think you and I would propose,
David, for corporates to adopt. And you know, my worry is that,
you know, it's adopted piecemeal without that

(13:33):
overarching assessment at the very beginning of what kind of
structures you need to keep the corporate safe, to make sure
that its purpose is being delivered by using these new
tools. Otherwise it's just a sort of random
idea that this is going to improve productivity.

(13:54):
And we're just going to install a few tools here and there.
I think we have to be much more holistic about the way that it's
delivered. Is there a greater danger from
AI risks than traditional security risks?
Security is now a board-level topic because of the

(14:17):
potential reputational, financial, legal damage that can
ensue. Where does AI stand from a risk
standpoint and the prominence of the risks?
My assessment of the current flavor of AI, the current
generation we face, it's a lot like insider threat, just like
you need to be worried about insider threat where you don't

(14:39):
know if that human might do something that's maybe not
malicious, but they did something wrong.
They just sent 250,000 names publicly that should not have
been made public. That's just an example.
Or maybe they are malicious. Maybe they're actually
a disgruntled employee, or they're actually getting paid to do
something, or maybe they're getting coerced.
And so, when you bring AI into the

(14:59):
enterprise, it almost needs to be a way to upgrade.
And you'd have the conversation with your board about we're
going to treat AI just like we need to upscale our own insider
threat issues. Because again, with this AI, you
know, I would not recommend using
DeepSeek. But if you're using DeepSeek,
you should be aware that it's also sending metadata and
information overseas. And are you comfortable with

(15:19):
that? So I would say it is a risk.
It is a significant risk at the
speed and scale it can operate. However, I think we have tools
very similar to how we actually approach insider threats within
companies. The same thing can actually be
done for AI too. And I think the shrewdness of
the question, Michael, is that I don't think many boards are

(15:40):
there yet. I think that Dave is absolutely
right about what should happen, but I don't think that is
currently happening. There's a great focus on what
the opportunities are, but it's not really looking about the
risk, how it fits within the risk appetite because I don't
think the risk is necessarily appreciated or the level of risk

(16:02):
is not yet appreciated. So I think boards, I mean, my
worry about boards universally is they don't necessarily have
the skills yet to really assess at that level the kinds of AI
that they should be adopting, how AI will fit with their
values and their objectives and so on.

(16:22):
But you know, this is these are quite big issues across the
board. Let's jump to Twitter and take
some questions there. This audience is so smart,
so intelligent. I love these questions.
Here's one from our friend Anthony Scruffignano, who of
course has been a guest many times on CXO Talk.

(16:43):
And he asks this: as a new generation is emerging that has
learned to let the machine do the work, how do we get the
workforce to use their excess capacity to think for themselves
and focus on new unmet challenges?
There are two buckets. First, as they are using the

(17:05):
machine and working with the machine as Lord Tim said, and I
agree enthusiastically, still be skeptical.
You know, just because the machine has given you an output, you
almost need to sort of double-check the work.
And it doesn't mean you have to check everything, but you need
to, you need to gain some expertise.
The second thing is we now need to figure out how do we
actually sort of do a combination of not just one

(17:29):
individual to an AI, but teaming across multiple people and
machines, because it may very well be
the future that, especially for an entry-level employee, you may
actually have, either as a deputy boss or even as a boss, your
initial boss actually giving you tasks through an AI
system, because what you're initially doing is more, you

(17:49):
know, sort of learning entry-level work, and you have to be
comfortable with that. But then what's your recourse if
you think the AI is giving a wrong task or the task is no
longer fit for purpose, because it might do that.
And so I think we actually need to begin very early on,
maybe as early as middle school, exploring what it looks like
to do combination hybrid human-AI teaming, and then figure out

(18:12):
one, what the actions should be: you're supposed to be skeptical of
what's coming from the machine, so you triangulate it.
And then two, what do you do if you think whatever the AI is
asking to be done or whatever task is being given to the team
doesn't make sense? When does the human raise a hand
and say, I don't think that's right, I think something's wrong
with the machine? Absolutely agree with David.
It starts really further down the pipeline at school in terms

(18:35):
of the kind of critical thinking skills that we need.
The trouble is we're starting from here and therefore
corporates, you know, they need to assess the risk of, if you
like, over-dependence on these tools where there is no proper
human in the loop where there probably should be, there is an
over-dependence on the tools and there is no real human

(19:00):
intervention which actually could get the same kind of
result. So the skills of individuals
using these tools are going to be absolutely critical.
And you know, we no longer need now to amass information.
We don't need to, in a sense, gather the data and/or

(19:21):
analyse the data in many ways, but drawing conclusions from it
and critically assessing it is going to be absolutely, you
know, right front and centre for the kind of skills we need in
our businesses. And you know, again, it requires
an understanding at the top level of our businesses as to
what, you know, can usefully be done by humans versus machines.

(19:45):
And I don't think that understanding is quite there
yet. And I think, you know, David,
you do a lot in this area, and it's really a question
of bringing that right front and centre.
I think this idea that we're all going to have a great deal more
thinking, I'm sitting there in business and this is going to
allow us to dream up new creative thoughts.

(20:06):
I think might be slightly optimistic, but I think it is
going to be a partnership because you know what they talk
about AI in the legal industry now is the sparring partner.
And I think that's a lovely analogy, basically where you're
testing propositions and you're kicking the machine and saying,
well, what about this, what about that?

(20:27):
Can we do it better this way or that way?
And you, you have somebody that you can argue with.
So I quite like that. I think Lord Tim hit it spot on,
which is there are actually some articles that say
the reward for your AI partner is more work, you know, so it's
not necessarily going to free things up.
I love the sparring partner example, partly.

(20:48):
Well, so in January of this year, the US administration
pushed out some export controls and they rushed them out
the door before there was a change in leadership.
I kind of felt like they were rushed out the door.
And so I actually went to an AI model, a GPT, and I said,
pretend you want to get around these export controls and you
not only want to get around them, but you want to make
money. And sure enough, guess what the
AI did? It told me five ways to do that.

(21:11):
And so, exactly that: as we look at different policy things, as
we look at different decisions, we should at least red-team
them. Red teaming comes from the
world of asking what could possibly go wrong or how might things be
misused or abused. I think this is increasingly
where again, but it's not that you turn to the AI and just turn
it all over. It gives you thoughts almost as
its sparring partner, as Lord Tim said, and then you as the

(21:31):
human adjudicate: yes, that makes sense, or no, that couldn't
possibly happen. This would be an excellent time,
folks, to subscribe to the CXO Talk
newsletter. Go to cxotalk.com and subscribe to our newsletter so
we'll notify you of discussions like this so you can
participate. Do it now.

(21:52):
So here's a really interesting question from Arsalan Khan when
he's talking about the US, but I think this is broader.
When the US, he says, removes AI guardrails, is the US population
or the world really ready? Tim, I think this one's tailor-made
for you. Although the US... I mean, individual

(22:13):
states are different. I mean, obviously we've got some
regulation in California and Colorado and some other states.
But by and large, we don't think of the states as having federal
legislation on this. Although I see that even the
president has expressed some frustration about that.
And I wonder whether federal legislation isn't coming down

(22:34):
the track. But yes, I mean, I think that
there should be a desire on the part of, if you like, the
ordinary U.S. citizen to have standards that are universally
applicable to the kind of large language models that we've been
talking about. And there are those
international standards. And interestingly enough, you
know, US institutions like the National Institute of Standards

(22:57):
and Technology have developed some very sophisticated
standards as well, which could be adopted.
So, you know, if the willingness was there to
legislate or regulate, it wouldn't have to be very
complicated. And I think it would provide a
much greater degree of public trust, which is the trade off.

(23:17):
You know, I'm a great believer in having a level of regulation
which doesn't impact on innovation because it creates
public trust. But you know, I think there's
still quite a long way to go. And, you know, the trouble
is that we risk having different regulatory regimes across the
world. So my view is that what we need

(23:39):
to try and do is agree international standards, which
mean that we can adopt these tools universally across the
board and then we don't have to sort of think jurisdiction by
jurisdiction in all of this. But if we don't have regulation,
then of course that puts the onus back on business to make
sure that we are kept safe and we can trust the AI

(24:04):
tools that are being used. And you know, the
less regulation you have, the more responsibility business has,
in a sense. It's not so much that the US has
removed guardrails, it is that they have not enacted anything
at the federal level that actually is a legislative
guardrail. Now that said, just this week on
Tuesday, we did see a new executive order and some

(24:26):
additional strategies come out on AI.
And if you read them, they actually call for NIST to be
involved. And so there actually is
movement this way. I think it is interesting, though,
because it could end up being that the US right now, for various
reasons is going to learn more through executive orders and
through executive policy versus congressional.
The other thing that I also would raise, at least from a US

(24:46):
perspective, and this is just one way to go about doing this,
is that we do have existing laws in the healthcare sector,
including HIPAA. We do have existing laws in the
banking sector including the Bank Secrecy Act.
It could be that we actually see Congress, instead of trying to
do one monolithic piece of AI legislation, which might prove difficult for
various reasons, may have the different committees go for an

(25:07):
upgrade. What does AI mean in the
healthcare space? What does AI mean in the defense
space? What does it mean in the banking
space? And so the US may have a
slightly different approach than what Europe has done or other
countries. But it is the case that, yes, we
have not done anything at the federal level legislative yet.
But as Lord Tim said, stay tuned.
And this is from Kurt Milne, who says, will it take a material

(25:32):
public failure for C-level teams to shift focus from AI
opportunity to also focus on risk?
Sadly, that is probably true. I wish it wasn't.
I wish we didn't have to have, you know, a big data breach for
people to get excited about cybersecurity or, you know, some

(25:53):
of the other crises that have caused corporates to change
their behaviour. I don't think we've yet had a
major scandal involving AI. I mean, you know, I suppose it
would be where, you know, a company's marketing material was
entirely created by AI and it turned out to be, you know,

(26:15):
completely eating someone's creative lunch or something.
I mean, at the moment I can't quite envisage the kind of
corporate-earthquake type of disaster that
would happen, but that may well be right.
But I very much hope that the work of people like David

(26:37):
penetrates the corporate world so successfully, Michael, that we
don't have to have a crisis that causes behaviour to change.
I'm a believer in voluntary corporate governance if that can
be achieved. It's a tag-team effort because
what you're doing too, I remember we met in 2017 and you
were chairing the UK strategy for AI.

(26:58):
So you are well ahead of the curve as opposed to something
that I could say more in the US where we're still behind the
curve. But the other thing I would say,
Michael, is there already are some examples where people have
used AI for helping to write code.
Some of the code that the AI suggests compiles and some of it
does not compile at all. And so that should be warning

(27:18):
indications. Do not blindly trust the
machine. But even more interestingly
enough, and I think this was in the last week or so,
unfortunately one of the AI agents that helps with writing
code completely deleted a company's code library.
So that should tell you there are things that are warning
clouds on the horizon that if you do not think about a risk

(27:39):
based assessment and you put this in charge of something
that's very tied to your intellectual property or tied to
your finances, bad things can possibly happen.
I should mention the whole area of copyright infringement,
Michael, has become incredibly controversial in Europe.
And you could say that, you know, for reputational purposes,

(28:02):
it's the AI model developers which are at risk of very, very
strong reputational damage if we don't find a proper solution to
the, you know, training on copyright material and so on.
And then of course, earlier on, David mentioned the lawyers who,
you know, had false citations. I mean, you know,

(28:23):
the legal profession now, they've wised up pretty quickly to the
fact that they need to have closed systems which can be
relied upon, not just asking ChatGPT to give you a, you know,
a legal pleading; that is highly dangerous.
So there's already been quite a bit of wising up.
Let me just also mention that it's interesting.

(28:47):
If we look at traditional security and data breaches,
there have been massive data breaches.
I think all of us have had our Social Security numbers and
personally identifiable information, you know, spread
out there on the web. There's no escaping it.
And still these breaches happen all the time.

(29:07):
One of our major retailers, Marks and Spencer, you know,
they were unable to stock, unable to deliver, I mean, you
know, for months and months. And I think they're still
suffering from the after effects.
So, yeah, I mean, risk management, risk assessment is,
you know, so crucial in all of this.

(29:27):
And, you know, AI needs to be added to all of this.
And our first questioner rightly raised that it needs to be very
much integrated into the whole risk management process.
Here we have a question from LinkedIn.
It's from Mohamud Gibrel, and this one is for David.
He says: do you see the need for new

(29:50):
agent management tools and platforms, for example, managing
agentic AI sprawl with security, compliance, cost, and so forth?
Short answer is yes. The slightly longer one is it
may not look like some of the tools of the past because this
is not steady state. Because we just talked about how

(30:11):
generative AI and other AI approaches are possibly
non-deterministic and that they are generating their own content.
If you do what has been done in the past with forensics and you
assume steady state, you will always be behind.
And so that's where I'm really excited about sort of patterns
of life where CIOs and chief information security officers
can go to their boards and say, look, we can actually get a

(30:33):
win-win, where we upgrade the cybersecurity posture of our
organization while simultaneously also getting it
in better shape for managing AI risk and actually
detecting AI risk. Because we're looking at what
the normal patterns of life are for our information systems and
our AI systems, and then if we see something odd, we're
stopping it and then doing a pause and trying to figure out
what happened here. So I think that's necessary.
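One concrete way to keep agentic sprawl visible, in the spirit of the question above, is a simple agent inventory that records owner, scope, and spend for every deployed agent. The sketch below is illustrative only; the field names and limits are assumptions, not a standard or a specific product.

```python
# Minimal sketch of an agent inventory for keeping "agentic sprawl" visible:
# every deployed agent gets an accountable owner, an explicit scope, and a
# cost ceiling, so security, compliance, and spend can be reviewed together.
# Field names and limits are illustrative, not a standard.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str                      # accountable human or team
    allowed_actions: list[str]      # explicit scope, e.g. "read_invoices"
    monthly_cost_cap_usd: float
    spend_to_date_usd: float = 0.0

    def over_budget(self) -> bool:
        return self.spend_to_date_usd > self.monthly_cost_cap_usd

registry: dict[str, AgentRecord] = {}

def register(agent: AgentRecord) -> None:
    registry[agent.name] = agent

register(AgentRecord("invoice-triage", "finance-ops",
                     ["read_invoices", "draft_email"], monthly_cost_cap_usd=500.0))

# A periodic review loop can flag unknown agents, scope creep, or overspend.
for agent in registry.values():
    if agent.over_budget():
        print(f"Flag for review: {agent.name} exceeded its cost cap")
```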

(30:55):
The other thing I would also give a nod to is for certain
flavors of AI, especially the ones that are pre-compute.
The IEEE, after about five years, released just about a
month ago what are called the Spatial Web Protocols.
And just like how, on the web, we use HTTP to find
things and we use HTML to actually describe a web
page, now you can
actually use what's called HSTP, which actually allows you to

(31:19):
basically address anything in space and time almost as if it were
a web domain or web URL. And you can use HSML to actually
write things up and bound things by space and time.
And so what does that mean for AI?
We can actually write descriptions and say, in general,
autonomous AI systems that are flying a plane should not plow
into the ground, and in general cars should not actually drive

(31:42):
into other cars. You can actually begin to have
space and time constraints. And even more interestingly,
there are some early indications that you can
actually take that and use it for policy as well,
because policy often has boundaries.
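The space-and-time bounding idea can be illustrated without reproducing the actual HSTP/HSML syntax, which is not shown here. Below is a hedged Python sketch of the underlying check: an agent's proposed action is only permitted inside a declared geographic and temporal envelope; the coordinates and time window are invented for illustration.

```python
# Illustrative sketch of bounding an autonomous system by space and time.
# This is NOT HSTP/HSML syntax, just the underlying idea: declare an
# envelope, and refuse any action proposed outside it. All values invented.
from datetime import datetime, timezone

ENVELOPE = {
    "min_lat": 51.3, "max_lat": 51.7,     # rough London bounding box
    "min_lon": -0.5, "max_lon": 0.3,
    "not_before": datetime(2025, 8, 1, 8, 0, tzinfo=timezone.utc),
    "not_after":  datetime(2025, 8, 1, 18, 0, tzinfo=timezone.utc),
}

def action_permitted(lat: float, lon: float, when: datetime) -> bool:
    in_space = (ENVELOPE["min_lat"] <= lat <= ENVELOPE["max_lat"]
                and ENVELOPE["min_lon"] <= lon <= ENVELOPE["max_lon"])
    in_time = ENVELOPE["not_before"] <= when <= ENVELOPE["not_after"]
    return in_space and in_time

# The agent plans freely, but every proposed action passes this gate first.
print(action_permitted(51.5, -0.1, datetime(2025, 8, 1, 12, 0, tzinfo=timezone.utc)))  # True
print(action_permitted(48.8,  2.3, datetime(2025, 8, 1, 12, 0, tzinfo=timezone.utc)))  # False
```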
JPL, the Jet Propulsion Lab with NASA, has been using this in
experiments on the moon: because of the lag
effect, giving sort of the commander's intent to

(32:03):
the possible moon rover or whatever is on the moon, but
then allowing the AI that's on board, actually on something that
would be on the moon, to try to navigate within the space-time
dimensions. And so this gives me hope that
for some of the future of what we do with AI, especially if it
is pre-compute, we can also bound it by space and time
constraints. The other element of this is

(32:25):
absolutely right at the corporate level and the adoption
level by business. But of course, you know, agentic
AI, some of it snake oil, some of it not, has that autonomy, but
it also is now becoming retail as well.
So, you know, again, we have to make sure that, you know, if a
business has 1000 employees, they're not all thinking, yeah,

(32:47):
I'm going to adopt this agentic AI.
I'm going to put all these tools together in a semi-autonomous
way, because that is going to be a threat to the
business as well. It's not just cyber security or
data risk. It's, you know, decision risk.
There are so many aspects to this.
And so it doubles the risk as far as I'm

(33:10):
concerned, where you have that degree of autonomy in an AI
agent that you know, could unravel an awful lot of things
further down the line. So, you know, again, that means
that boards and senior executives need to be very, very
mindful. Arsalan Khan comes back and he
says this is really intriguing. He says why can't we have AI

(33:34):
write its own policies? What are the pros and cons of
the AI deciding the policies and implementing the policies?
There's no reason why you shouldn't, as long as you've
trained the AI in the right kind of a way.
You can have a monitor. You can, you know, I think you
have to have some sort of generative adversarial kind of

(33:56):
approach to it. I don't think
the same, you know, AI should be looking at itself, but I think
what you need is one AI looking at another, and that is going to
become extremely prevalent in my view, where you have a mixture
of tools: one is monitoring another, one is risk-assessing
another. You know, after all, we know how
quite a lot of artwork has been created.

(34:18):
I remember when there was a painting that was
created in Paris and sold in New York for half a million dollars,
called, I think it was, Edmond de Belamy.
And it was created by this generative adversarial network,
created by one AI criticizing the work of another

(34:40):
until eventually it produced this rather splendid portrait.
Well, you know, let's take that forward a few years and say, how
do we make sure that A, we create the policy and then make
sure that it's complied with. So, yeah, I mean, I think the
chief compliance officer has already got tools of that kind

(35:02):
in the box. This is from Chris Peterson, who
is maybe taking a slightly contrarian view here.
And Chris Peterson says, is there a balance we can strike
between AI regulation and AI innovation?
And here's the point: especially given the velocity of change and

(35:22):
the billions and trillions of dollars riding on the success of
big AI businesses. In other words, sure, guys, we
have to manage risks, but let's also not throw out the magic as
we risk-manage ourselves into the ground.
Thoughts on that? If you use whether it's human

(35:45):
produced or you use another AI to produce constraints for what
an AI should and shouldn't do, then you allow innovation to
happen within those constraints. And so if you have a separate
system that is monitoring that harm is not coming, either
mentally or physically, to a human being, then yes, that
gives you an additional degree of confidence that you
wouldn't have if you had just the AI system by itself.
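A minimal sketch of that "separate monitoring system" pattern: one component generates, and an independent check screens the output against declared constraints before anything is released. The constraint list and the generator/checker callables below are placeholders for whatever models or rules an organisation actually uses.

```python
# Minimal sketch of one system monitoring another: the generator proposes,
# an independent checker screens the proposal against declared constraints,
# and only approved outputs are released. The generator/checker here are
# placeholder callables; in practice they could be separate models or rules.
from typing import Callable

CONSTRAINTS = [
    "must not include personal data",
    "must not recommend transferring funds",
]

def guarded_generate(prompt: str,
                     generate: Callable[[str], str],
                     violates: Callable[[str, str], bool]) -> str:
    draft = generate(prompt)
    for constraint in CONSTRAINTS:
        if violates(draft, constraint):
            # Block and escalate rather than silently releasing the output.
            return f"[blocked: output failed constraint '{constraint}']"
    return draft

# Toy stand-ins so the sketch runs on its own.
toy_generate = lambda p: "Here is a draft reply with no customer details."
toy_violates = lambda text, constraint: "transfer" in text.lower() and "funds" in constraint

print(guarded_generate("Reply to this customer email", toy_generate, toy_violates))
```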

(36:07):
You may tell the AI system don't harm humans, don't harm them
mentally or physically. But as we know,
unfortunately, whether a wrong prompt is asked or the data was
incomplete or there's non-determinism, sometimes things
can happen. So I think when I brief
companies and boards and even governments, I say, you know,
what you really should be doing is not necessarily setting
five-year goals, but setting five-year visions for how you

(36:30):
want the technology and society or technology in your company to
perform, and then use constraints as your friend.
And so, just as Lord Tim was saying, you could use AI
to actually sort of bound other AIs.
There are other approaches too. And we've seen already, I've
already seen, there are certain companies that are now
using AI to help coach you to do better prompts to an AI.

(36:52):
And so that's already a proof point that this is possible.
So I think at the end of the day, you definitely can have
both. And given that, and as Lord Tim
said, you know, regulation will always be playing catch-up with
the technology, by using constraints as opposed to
setting very fixed things, which actually might be either out of
date or restrict innovation.
If you do constraints, that allows you to still protect the

(37:15):
things that need to be protected, but allow people to
explore that space as well. I've never believed that good
regulation is the enemy of innovation.
I've always thought it's actually the friend, because
we want the right kind of innovation that benefits humans,
that doesn't substitute machines for everything that we do that
is for our benefit and doesn't create some of the harms that

(37:39):
we've been talking about today. And I think the way you do that
is that you set out the principles of the kind of AI
that we want to see. And then we try and adopt
standards, mandate particular standards.
And we've got to be agile in the way that we do that.
And the great thing about standards is they can change
over time. You know, and we talked about

(38:00):
NIST earlier, but there are many other organizations, IEEE, which
David also mentioned earlier, ISO, the international standards
organization. And all of them are trying to
move towards interoperability internationally.
And for me, that is, I think, the goal; if we can get that
and we can mandate that sort of quality into large

(38:24):
language models, into other forms of AI, then I think we've
made some real progress. But the
idea that no regulation means we can just have a complete
free-for-all and produce any kind of innovation, whether it's good or
bad for humanity, I think is what we need to push back against.

(38:45):
Is there a distinction between this balance of regulation
versus innovation when it comes to generative AI versus any
other new technologies? Tim, any thoughts on that?
I always use the example of the automobile.
I mean you look back at the automobile, what created the

(39:05):
growth? I mean, it was the rules of the
road. It was manufacturing standards,
you know, and I mean, it was ten years; when you look at
5th Ave. photographs, 1903 to 1913, that came in pretty
fast. You know, it was all horse and
carriage in 1903, and all automobiles except for one horse and

(39:26):
carriage in 1913 going up 5th Ave.
So we've been here before, and we didn't just allow, you know, the
automobile to run everybody down on the roads.
Of course we started with a red flag, which was probably, you
know, maybe safe in the circumstances.
But very soon we got sensible regulation and look how

(39:48):
innovative has the automobile industry been over the last 100
years? Enormously.
So I'm actually optimistic that we can get the right form of
regulation without stifling progress and innovation.
This is from Chris Davidson, and David, he says on LinkedIn,
most of the conversation around AI centers on innovation

(40:11):
from big companies: OpenAI, Anthropic, Microsoft, and so on.
As an equity partner in a new AI integration startup, he's
curious what you think are the biggest
opportunities for the little guy in the marketplace.
In other words, what are the areas of value on which smaller

(40:32):
companies can stake a claim with minimal risk that they will get
crushed by these big players in the next three to six months?
So I'll give three real quick. One would be, while these big
players are focusing on generative AI, generative AI
is actually using algorithms, you know, deep neural
networks, that go back to the late '80s, early '90s.

(40:53):
So it's a technology only possible today, but it's not
necessarily new. I would say look at additional
approaches that are coming out, the work of Karl Friston on active
inference. I mentioned that; that's
actually something that can be pre-compute and be bounded.
And I think the future is not going to be one AI method to rule
them all. It's going to be mixed-model.
I mean, we know that's actually what happens with us is

(41:15):
that you use different things. And we may find that generative
AI is really great for replacing natural language processing.
But as you see with the papers that are coming out, these models are
not reasoning. They're only doing pattern
matching, and to the degree that pattern matching and
reasoning match, they're doing that.
So first, there are other AI approaches that are not getting
anywhere near the oxygen that they need, partly because
generative AI has consumed so much.

(41:37):
The second thing is we need AI at the edge, including
generative AI at the edge. And so while these
companies are selling you sort of the monolithic platform,
you got to subscribe to their services.
I want something that I can run on my phone.
I want something that I can run on my laptop without it calling
back. And so it's got to be able to
operate in low-bandwidth environments and low

(41:58):
processor environments. And you've seen, you know,
DeepSeek, that was an example of distilling a model.
I'm not necessarily saying use DeepSeek, but you also see that
both Berkeley and Stanford took ChatGPT 3.5 and they went to
Llama, the open-weight model, and found ways to actually ask
ChatGPT how to upgrade Llama to be almost as good as ChatGPT

(42:19):
3.5. So we need more innovation about
AI that can run at the edge in disconnected environments or
low-processing environments. Because at the end of the day,
some people may not want to connect back to the cloud.
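Running a small open-weight model locally, as described here, can be sketched with the Hugging Face transformers library; the model name is illustrative, and a genuinely disconnected setup would need the weights downloaded and cached in advance.

```python
# Minimal sketch of AI "at the edge": run a small open-weight model locally
# with no call back to a cloud API. Assumes the Hugging Face `transformers`
# library; the model name is illustrative, and the weights must already be
# cached locally for a truly disconnected environment.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",   # illustrative small open-weight model
)

result = generator(
    "Summarize in one sentence why edge deployment matters for AI:",
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```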
And then finally, this is actually more the human
dimension. Right now, some of these companies,
I really try to have empathy for the fact that there are some
companies out there saying it's going to displace 50% of

(42:40):
jobs. Well, actually, I don't think
that's the case. I think while jobs may be lost,
people will retrain. And so it's not actually an either-or.
And so what I want to see is AI
companies and AI startups that are actually helping people
navigate the chasm between you used to do this, now you're
going to be doing this with AI. How do I accelerate your
learning journey so that, yes, your job has been

(43:02):
let go, but you're now doing a different type of job and you're
actually much more productive. And so navigating that in a way
that gives people agency as well as reducing anxiety.
I think there's going to be a whole industry there.
And I think right now anyone who's just saying it's just
going to be a loss of all jobs, period, without saying that,
at the same time, new jobs will be created.
I'm really interested in the new jobs that will be created and

(43:24):
how we actually have private industry help people skill up
for that. Lord Tim, thoughts, very
quickly, on this issue of opportunity.
How can small players survive? Yes, very quickly, I thought
David's point was very shrewd. My point would be actually data
is still going to be king in all of this.
The large language models are a commodity.

(43:46):
I don't think they've made any money from the large language
models. So my view would be I would go
to open source or distill in the way that David discussed, and
find a proprietary database that I had an exclusive license to,
which had a particular use, maybe in healthcare or education or

(44:07):
agriculture, something like that.
And I was the only AI developer who had the access to that data
and was able to use my AI model. And I think then I
would get an open-source model to help me with all that.
And I think then I would be in business.
But you know, I'm not an AI developer.
What do I know? We need to spend a few minutes

(44:31):
before we're done here talking about jobs and the impact of AI,
generative AI on jobs. David, you want to take some
crack at this first? When I met Lord Tim back in
2017, what really resonated at the time with the UK government,
they were proposing data trusts. Here in the United States, we

(44:52):
call them data cooperatives. But this idea that businesses or
people could come together, like maybe we're all musicians or
we're all artists and we actually say we are willing to
let our data be used for the following purposes.
Well, here we are now in 2025, looking at 2026.
We need that in the United States.
I mean, because I actually say that right now we have a few
companies that seem to be repeating the lessons of Napster

(45:15):
where they have acquired data, but not necessarily respected
intellectual property, not necessarily respected the equity
of the people that produced it. And it also makes it harder
to know whether or not I can trust it, because when the machine gives
me an answer, where did it come from?
So I think this does put the premium on exactly what Lord
Tim said, which is the future of jobs is, whether it's an
interesting data set or it's people that are musicians,

(45:36):
artists, entertainers, or maybe it's also we just care about
finding a cure for Parkinson's. On top of it, though, we no
longer have to shift the data. One, that's a cybersecurity
risk, but then two, there's what's called federated learning, where
the algorithms actually come and learn in situ on the data
itself. And we can actually monitor
what's being learned. And then when it leaves, we
actually record that. And then there's some either
financial exchange of value or some non-financial, some other

(45:58):
benefit, that is, equity in whatever cure for Parkinson's or
whatever research was done. I think the future of jobs is
both rethinking data so that we can actually make sure people
have some sense of stakeholderism in their data.
And then again, it's getting out
there and saying this whole narrative of it's going to kill
jobs period, as opposed to it's going to displace jobs.
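The federated-learning idea David raises above (algorithms travel to the data, and only model updates leave) can be sketched as federated averaging. The toy below uses plain Python lists as stand-ins for model weights; real systems would add secure aggregation, privacy controls, and the audit trail he describes.

```python
# Toy sketch of federated averaging: each data holder trains locally and
# shares only a weight update; raw data never leaves its home. Lists of
# floats stand in for model weights; everything here is illustrative.

def local_update(weights, local_data):
    """Pretend training step: nudge weights toward the local data mean."""
    target = sum(local_data) / len(local_data)
    return [w + 0.1 * (target - w) for w in weights]

def federated_average(updates):
    """Average the updates coming back from each participant."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = [0.0, 0.0, 0.0]
cooperatives = {
    "musicians":  [1.0, 1.2, 0.8],      # each holder's private data stays put
    "farmers":    [3.0, 2.8, 3.1],
    "hospital_a": [2.0, 2.1, 1.9],
}

for round_num in range(3):
    updates = [local_update(global_weights, data) for data in cooperatives.values()]
    global_weights = federated_average(updates)
    # An audit log of what left each site is where data stakeholderism gets enforced.
    print(round_num, [round(w, 3) for w in global_weights])
```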

(46:20):
Much like how when the automobile showed up, well, yes,
horse-drawn carriages and those sorts of things, they went away.
But then there were taxicab drivers or truck drivers or
people that maintain cars. New jobs were created.
And so the faster we can actually help people be aware
that new jobs will be created and then help them with that
journey, I think will actually reduce some of the

(46:42):
anxiety that's present in societies at the moment.
I take that data communities aspect extremely seriously.
I think valuing our data and making sure that collectively in
the way that David describes it, we can make use of that and
value it and monetize it, is going to be very important.

(47:02):
I chair an authors' licensing and collecting society, and I can
see the equivalent happening in all kinds of areas, not just the
creative industries, but, you know, farmers collectively, for
instance, with their data and being able to use that and in a
sense parlay it, you know, license it, make it valuable

(47:23):
for use by AI developers.
So I think that's entirely right. The second thing I would say is
that I think that human empathy will still be a requirement.
I think those human creative qualities are still going to be
important in the future. And then, of course, what
we've already mentioned is that critical thinking that's needed

(47:47):
because if we don't think critically, then frankly AI is
definitely going to take over and we're not going to have
really much of a function, particularly if AGI comes along
anytime soon. Do you have thoughts on what
kinds of jobs will get displaced by AI?
What kinds of jobs will not? And

(48:09):
very quickly, please? If you can stick into Claude or
ChatGPT, you know, tell me, give me a business plan for the next
five years, you know, and this is my vision and these
are the annual reports of my competitors,
and that's exactly what you can
do nowadays, then you're going to find that

(48:30):
kind of job going. So I think this is a white-collar
revolution, basically. And I don't think we've been
here before in anything like the same way.
And it's not a blue-collar revolution; David was
talking about drivers and so on. I mean, you know, we have a gig
economy, and they're driven by algorithms in
terms of their performance. But that doesn't mean to say

(48:52):
that those jobs are going. People are still going to be
riding around on scooters and motorbikes in the future.
We're going to be the ones; the professionals are the ones who
are at risk. David, literally in one sentence,
what jobs will leave and what jobs will remain, just being
displaced or not? So much like the printing press

(49:13):
where scribes went out of work but typesetters arrived.
If you were in the business of web development, interface
development, that'll increasingly be done by AI.
Anything where the past is like the present, that'll be
done by AI. The new jobs will be the jobs
that are novel, where the past, you know, the
past does not inform the present, where you have to
think on your feet, where each day is different.

(49:35):
And the people, then, when they look at what comes out
of the AI, are being those skeptics, doing that critical
analysis. Arsalan Khan says something
interesting. Whoever defines ethical
boundaries defines the direction of AI.
If the world doesn't have consistent ethical boundaries,
then AI is just a reflection of our own biases.

(49:57):
Who wants to just very quickly take a crack?
That's an interesting statement, but let's think back to the
1700s, Arsalan. There were plenty of things that
people thought in the 1700s were ethical that we would not
think are ethical now. Same thing in the 1800s. I often say
to my British counterparts: in World War One, the British thought
that Q-boats were ethical and submarines were not ethical,

(50:17):
and by World War Two, that had flipped.
Ethics are socially and temporally defined.
What I think he's saying though is what are the principles we
want to hold true to? And I think that is an important
conversation to have. But recognize that ethics, you
know, there are plenty of things we thought 200 years ago were
ethical that are not ethical now.
Let's make sure that as we look at ethics now and we spend time

(50:38):
on it, maybe what we really should be saying is what are the
principles and the constraints that we care about as opposed to
just ethics? There are principles that have
been agreed internationally. They may not be agreed in terms
of regulation, but the OECD principles, China signed up to
those, you know, the G7, the G20. So I think I'd be

(50:59):
more optimistic than Arsalan. Looks like Wei Wang from
LinkedIn is going to get the last word here, and I'll just ask
you each for one quick sound-bite sentence.
She says this: when there is a gap in popular perception and
the technical reality of implementing reliable AI

(51:19):
solutions, what advice do you have for
teaching and explaining clearly the misconceptions, and for
building in the rigor to help convey the complexity?
So it sounds like the individual was trying to make the case to

(51:39):
either like a board or C-Suite or whatever.
And I would say bring examples because there are plenty of
examples, but also, you know, and I wouldn't necessarily do
this for the board because they may not have the bandwidth, but
for your employees and your staff, let them play with the
technology and see when it works and when it's fallible, because
then it actually makes it much more real and it's
visible. However, I think this is a case

(52:00):
where, again, whether it's case studies or experiencing it yourself,
you almost have to sort of show and make it real because
otherwise people will not see the value.
And again, there's that fact that we see the machine and we
think it must be like us because it's talking to us or it's
giving us text. We need to recognise that our
own human emotions will run the risk of missing the fact that we
have to be sceptical of these alien interactions.

(52:22):
We are going to be the victims of our own trust if we're not
careful. And you know, I'm a great
believer in encouraging a culture of curiosity, but I
think we have to encourage a culture of caution as well.
I think we just have to get the balance right between the two.
And if we can encourage that, and it's not just business, it's
government, it's families, you know, and we're way behind the

(52:46):
curve, unfortunately, because technology is moving so fast.
Well, with that, we're out of time and clearly there's a lot
more to discuss, so I hope you'll both come back and be
guests on CXO Talk in the future so we can continue this
conversation. Thank you very much, Michael.
It was great to be here. Truly appreciate it, Michael.

(53:09):
And a huge thank you to Lord Tim Clement-Jones and Dr. David A.
Bray, and to the amazing audience who asked such
excellent questions. Thank you all.
We have incredible shows coming up.
Check out cxotalk.com, subscribe to the newsletter, and we'll see
you again next time. Take care, everybody.