Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to CXO Talk, episode 875. Today we're honored to host two
distinguished members of the UK House of Lords, Lord Chris
Holmes and Lord Tim Clement-Jones.
We'll explore how policymakers at the highest level balance
innovation, regulation, and public trust in technology,
(00:24):
specifically related to AI, digital assets, and open data.
Let's get into it, gentlemen. Welcome to CXO Talk.
It's great to see you both. Hi, Michael, good to be here.
Lord Holmes, tell us: why do we care about this topic of AI
regulation and governance in this way, and especially at
(00:47):
this time? They're incredibly powerful
technologies, but they're in our human hands.
And if we're going to optimize their potential, whilst being
cognizant of their challenges and their risks, it seems
entirely logical, as we have done through centuries with other
advancements and technologies, that we consider the right
(01:09):
regulatory framework. So we balance all of the
competing needs and we put in what I would describe as
right-sized regulation. Right-sized regulation: good for innovation,
good for investor, good for citizen, good for creative, good
for consumer. Tim, why is all of this so
(01:29):
beneficial, and how do we avoid interfering, presenting
obstacles to innovation, as Chris just said?
Both Chris and I believe that actually regulation can help
innovation. And if you basically get it
right in terms of clarity, consistency and
(01:50):
certainty for business, for consumers, for developers, that
will actually lead to a much better and safer form of
innovation at the end of the day.
And that's what we want, and we want to see adoption.
We want to see roll out of new technologies, but they've got to
(02:11):
be for our benefit. And you know, as you and I have
discussed over the years, Michael, this technology has
been advancing at an incredibly fast rate.
You know, back in November 2022, for instance, suddenly ChatGPT
burst onto the world, and people have been struggling to catch up
(02:32):
ever since. And now, of course, this year
people are talking about agentic AI, which is, you know, much
more autonomous. So you've got this mixture of
large language models and autonomy.
And so you've absolutely got to have some kind of guardrails
around that. Chris, when we talk about the
guardrails and the necessity for protection, how do
(02:58):
you implement that type of governance, but at the same time
ensure that there are no obstacles, or minimal
obstacles, to innovation, going back to your initial comment?
We know what we need to know to make a success of this.
All of history suggests to me that we know how to do
(03:19):
right-sized regulation. We all know bad regulation.
But that's bad regulation; that doesn't mean that, as a
consequence, regulation is bad. I think if we take a
principles-based, outcomes-focused and inputs-understood approach, we
give ourselves the best chance of making a success of this and
(03:43):
to those principles: trust and transparency,
inclusion and innovation, interoperability and
international perspective, accountability, assurance and
accessibility. Good principles, I would suggest,
for AI regulation. Good principles for all regulation.
And in the UK, the previous government in its white paper
(04:07):
set out principles, but set them out on a voluntary basis.
But they're good principles. And if they're good principles,
it seems logical that one would want them on a statutory basis,
to give that opportunity that, whichever part of society,
whichever part of the public or private sector, whichever part
(04:28):
of the economy you find yourself in, you're likely to experience
the same approach to AI, the same guardrails around AI.
It's as Tim said: whether you're an individual,
whether you're a consumer, whether you're a business,
whether you're an innovator, what ultimately you want,
(04:51):
certainty, clarity, consistency. Because from those three Cs
comes what you really need as a human to enable action:
confidence. How do you get there?
I think by putting in place a regulatory architecture
which is rooted in those principles and, crucially, is
(05:12):
horizontally focused. So it's cross-sector; it goes
right across the economy. By doing that, you're able to
identify areas where there is minimal, or indeed no, regulatory
cover or regulator in the UK, recruitment being an obvious
example, to bring to bear on that.
And if you have that framework, it has the principles
(05:36):
at its core, but it has the agility and the dynamism to
develop, particularly in our common law jurisdiction in the
United Kingdom. It can develop, it can be agile.
And for a comparator, if one looks at other more prescriptive
jurisdictions, there's no suggestion necessarily that
prescription is wrong of itself, but it is necessarily different.
(06:02):
And prescription necessarily tends towards trying to capture
every last element in that legislation, which always has a
danger of trapping that legislation at that point in time,
or ending up being over-controlling through that
over-prescription. So I think by having agility,
(06:24):
flexibility, it's entirely possible.
It's what all governments, all regulators, should be
seeking: to hold at the same time the needs of the citizen,
the needs of the innovator, the needs of the investor.
It makes it more complex, but that is completely what needs to
be if you're going to have this right-sized regulatory approach.
(06:45):
I think any form of regulation in the world of AI does have to
be risk-based. And that gets you to a kind
of proportionate approach to regulation.
And the other point, and Chris used the word interoperability
earlier, and I think in a UK context particularly, but
also worldwide, if we're going to see AI applications
(07:08):
thrive across the world, not just sitting in one jurisdiction or
another, is the adoption of international standards.
And I think that is really the big challenge that we've got.
So whatever form of regulation in any jurisdiction (because the
US is going to be different from Europe, which is going to be different
from the UK), what we need to do is make sure
(07:31):
that developers and businesses and adopters and so on aren't
trammelled by regulation too much, and they really can
understand what standards they're meant to
adhere to, whether they're of safety or ethics or, you know,
however you might describe it.
And those are the standards which are currently in the
offing and are being developed. And I think that's one of the
(07:52):
most positive aspects at the moment.
I think that's right, Tim. And building on that standards
point, that sense of bringing the world together and having
that international, collaborative, relational
approach has to be right anytime, but particularly at the
moment, it would seem extraordinarily pressing because
(08:15):
to go to the US: people would be mistaken,
and it's easy to be mistaken, to think that in the AI space there's
no legislation, there's no regulation.
But when you go into more detail and you see what's happening at
a state level, different legislative and regulatory
(08:36):
instruments being brought to bear, it's far more interesting
and far more complex than that. And it demonstrates again a need
to bring some interoperable approach to it.
There'll need to be, at some stage, some federal activity in the
space, for sure. For that,
I'm really keen, and have brought into all of the legislation I've
(08:57):
done in this space, this sense of really looking at that
international piece, and the standards are a key part of
it. But also, we could do nothing short of potentially
reinvigorating so many of those
international organizations set up in the wake of the horrors of
the Second World War. And if we could get a form of
(09:21):
engagement and approach from all of those international
organizations, that could be extraordinarily powerful for them,
and indeed for the entire planet.
I just want to tell everybody that right now you can ask your
questions on Twitter using the hashtag CXO Talk.
And if you're watching on LinkedIn, just pop your
(09:42):
questions into the LinkedIn chat.
I urge you to ask questions, because when else can you ask two
members of the House of Lords pretty much whatever you want on
these topics? So ask your questions now.
You've both been talking about
international collaboration. It seems like a very apt
(10:07):
discussion today with what's going on.
So what are your thoughts in this environment, where we have
tariffs, where the world economy is being upended, because
collaboration doesn't seem to be happening?
So what's the impact there on AI and the
(10:34):
development of these tools that we all want to develop properly?
People, you know, like us who work in the digital world find
it very ironic that all the debate coming from the States is
about the world of physical goods, the trade in physical
goods, the deficit in physical goods from the States, in terms
(10:56):
of, you know, the States feeling it has to impose tariffs on
those imports. Whereas actually the United
States now is, you know, the world's most powerful country in
terms of digital services. And, you know, the big tech
companies, Microsoft, Google, Amazon, they are immensely
(11:18):
powerful. And I think that is the
countervailing power in the economy of the US, which is not
really being taken account of. And it's what the digital
services tax was going to try to redress.
But of course, now that's up for grabs and, you know, there
could well be some kind of trade-off,
(11:41):
from what we see in the media about what our
government's up to.
And it may be that Europe has to trade off to some
extent. But it is ironic, because
actually you should really take the two things together and say,
well, well, look, you know, we need, if we're going to come to
agreement, we have to take account of the fact that the US
(12:01):
companies, big tech companies are hugely dominant.
It seems extraordinarily analogue in a digital world, or
extraordinarily automotive in an algorithmic world.
The automotive industry is obviously incredibly important
and significant worldwide, but to have something which looks at
(12:25):
this through a lens that doesn't look so much at services, at the
digital economy, if you will. And also, I was under the
impression that quite a number of centuries ago we'd resolved
this issue of international trade.
(12:45):
I very much remember reading Adam Smith.
It only makes sense to have international trade, with all of
the economic and indeed social benefits that flow from that.
Of course there are questions around dumping, there are
questions around other forms of protectionism.
(13:07):
But as a cardinal principle, largely, we can win as a
connected, relating human society if we try and establish and
enable that route to free trade, as opposed to going an
alternative route, where we're only beginning to start to see
(13:28):
the more than significant consequences.
Tim, it doesn't seem to me like, right at the moment, the idea of
a connected world society is among the primary goals, let's
just say, at least in the US. No, that's very true.
In fact, it's almost a disrupted society.
And it seems that there is some goal to actually disrupt.
(13:54):
Now, you know, we're told that actually this is going to lead
to, and certainly U.S. citizens are being told this is
going to lead to, a powerful, even more powerful U.S. economy,
higher living standards and so on.
But you know, I've been brought up in the world and Chris has
said so himself just now. You know, we've lived in a world
(14:15):
where free trade was seen as something to be desired and the
more we could encourage that andenable that, the better.
And indeed, the whole digital services area has been built on
that. The Internet has been built on
the essence of free trade, effectively only mitigated in
terms of online safety. The whole of digital markets,
(14:40):
competition and antitrust policy has been predicated on that.
But you only interfere with digital services where there's
clearly an abuse of a dominant position, for instance.
So, you know, there's the sense: are we having to rewire our brains in
order to adjust to all this? I don't think it's going to
(15:00):
change our attitude, to thinking that it's much more desirable to
have an open economy, both digital and analog. But, you
know, we may have to adjust to a new reality.
Now would be an excellent time to subscribe to the CXO Talk
newsletter; just go to CXO Talk. But we have a number of
(15:23):
questions that are coming in, so why don't we jump to some
audience questions right now? This first
question is from Arsalan Khan on Twitter.
And he says, going back to the
discussion earlier about the guardrails and risks, whoever makes the
(15:45):
guardrails becomes the gatekeeper.
How do you ensure that there are no unintended biases that creep
in as you're undertaking regulatory efforts?
That's the same truth when one is regulating or legislating for
anything, and it's exactly the right point to raise.
(16:10):
The reality is, if you understand and are always
conscious of the values and the social and economic context in
which you're seeking to bring about regulations and
guardrails, and, crucially, you have meaningful, sustained
(16:31):
public engagement, that gives you the best opportunity of not
just bringing to bear that right-sized regulation, but right-sized
regulation which is really rooted in that social and economic
context, and thus has the ability to thrive as it develops over
time. If you like, legislators and
(16:52):
governments and so on, it's their responsibility to set the
rules for the regulators. So it's not as if these guys are
sort of freestanding. You know, they're there to put
into practice the principles, basically, and there should be
regular oversight of them. I mean, one of our complaints in
the UK is that we're not active enough as legislators in
(17:13):
overseeing how our regulators are doing.
And, you know, as a result, if you're not careful, you get
accused of bureaucracy, red tape, regulators blocking and so
on. Whereas actually, for the most
part, whether it's protecting consumers or farmers or those
who are subject to the
(17:36):
weight of technology, if you like, it's meant to be beneficial,
and it's perfectly possible to make sure that it delivers.
And this is from Anthony Scriffignano on Twitter.
He's been a guest on CXO Talk a number of times.
And I believe, Tim, you know Anthony. And Anthony says this:
he says, please share thoughts on the ongoing difficulty of
(18:00):
protecting intellectual property across borders,
that is, in the category of innovations in artificial
intelligence. As an inventor and innovator, it
seems antithetical to him that it is so difficult to protect.
We could talk about AI, and particularly intellectual
(18:21):
property, till the cows come home, especially given the
difficulty of intellectual property, copyright for
instance, used for training large language models, being used
in different jurisdictions but being subject to different forms
of copyright protection. I mean, the States is actually
a very, very interesting case, where you've got the need
(18:44):
to register copyright and yet there is the fair use exemption,
but that is being tested. And actually, in respect of
people like yourself, creators, I'm actually quite optimistic
that in different jurisdictions, whether it's Northern California
or Delaware or wherever it may be, judges are beginning to come
(19:06):
to the conclusion that fair use does not cover the use of
copyright content for training purposes.
And in the UK we have our own debate taking place, where our
government is proposing to align itself with the EU, and that is
going to be causing difficulties, because their proposal is to
(19:27):
have an exception for the training of large language
models that requires creators to opt out and say, sorry, I don't
want you to use my material, when it's going to be incredibly
difficult, both technologically and in practical terms, for
creators to opt out. So it is a really live issue.
(19:48):
And the big question is, you know, should those large
language model developers have a free pass, when, you know, creative
rights are still very, very important, whether it's authors
or musicians or visual artists or filmmakers?
It seems extraordinary that we're in an era now where, just by
(20:11):
dint of having a new technology, that then necessitates
tearing up centuries of well-understood IP and copyright
jurisprudence. Absolutely extraordinary.
As I said at the outset: principles-based, outcomes-focused,
inputs understood. And to those inputs: inputs
(20:34):
understood, respected, remunerated where appropriate,
and consented. And we're a long way from that
in any jurisdiction. I think the Supreme Court
decision was extraordinary really to come to that
conclusion. As Tim says, there's more hope
in some of the state and judicial decisions and others
(20:56):
coming down the track. In the UK, it's just as live an issue.
I published a report just at the beginning of March, to try and
bring more focus onto a lot of these areas about where AI is
currently impacting people's lives, and thus the need for
cross-sector, cross-cutting AI legislation.
(21:18):
I drew out eight realities, eight billion reasons to regulate, and
those realities were people at the sharp end of this.
And one of those archetypes was indeed the creative, who finds
herself or himself with their work being taken with no
consent, no respect and no remuneration.
It's why in the UK a bunch of our greatest artists, music artists,
(21:44):
released an album, and it was an album of silence, to make that
very point. This is where we're heading:
if we accept, which I don't believe for one instant we
should, and I don't believe for one instant we need to, that this
work can just be taken, then that's what we'll be left with.
Silence. Where otherwise we'd have those
sweet sounds that musicians bring us, blank canvases, a want
(22:08):
of artists bringing this stuff to bear.
And then the additional kicker on this: it is where you have
AI-generated content competing with those artists as well.
So we need to be conscious of that, you know, double impact on
our creative community, who we should rightly be standing up
for: their IP, their copyright, their rights.
(22:32):
Tim, Oliver P on LinkedIn raises a similar issue.
He says there is not just a need for guardrails; AI regulation
is definitely required. However, the impact of
regulation hits many other areas.
He gives the example of AI image manipulation, and he says AI in
(22:53):
financial services is another key area.
And here's the crux of his question, Tim.
He says, how will regulation help de-risk AI-led decision making
for customers? He's saying customers, but I
think that's really for all of us in general.
Chris and I are both fans of putting forward private members
(23:16):
bills, and Chris has had a terrific private members bill on
AI regulation in general. I've taken a, a narrower
approach, which is about taking decisions by automated algorithm
in the public sector. And that is also a very big
(23:37):
issue, issue in the private sector.
Now, the increasing use of AI models in the private sector
to take automated decisions is going to be a bigger and
bigger issue as time goes on. And, you know, we have to have
a risk assessment, an impact assessment, to start with.
(23:58):
We have to have transparency about the use of these models.
We have to have the ability of, you know, the citizen or the
consumer to be able to make a complaint: to understand what
decision has been made about them, then make a complaint and
get redress. So there's a whole series of
steps which, you know, most governments haven't yet really
(24:19):
thought about. But, you know, we can't have a
situation where decisions are being made by machine, where the
citizen, the human doesn't really have any agency at all.
And I think many of us are worried that governments, in
their enthusiasm, you know, to get rid of a bit of red tape, to be
more productive and so on and so forth, are increasingly going to
(24:41):
adopt these models, and we're not going to have any
ability or any insight into what's happening.
And of course, you know, the same must be true of the private
sector. They've got the same drive
towards productivity themselves. And, you know, people now talk
about workplace impact assessments, to make sure that AI
(25:03):
is genuinely going to be used for the benefit of employees, in
terms of augmenting their working experience rather than
simply throwing them out of a job.
Chris, thoughts on this notion of AI as the decision-making
overlord? Very much so.
And as Tim says, it's why his bill on the public sector
(25:25):
use of ADM is so important, because this covers the whole of
society and the economy. And back to the principles:
there should always be a human in the loop.
Potentially you can move to a position of human over the loop.
And there should always be the right to a public explanation of
(25:46):
any decision. There must always be the right
to know that you're subject to an automated decision.
Say, for example, you apply for a bank loan and you get
turned down, and it's turned down by an automated decision, and you
don't even know that AI was in the mix.
That's bad enough. But imagine where you're in that
(26:09):
state context, where the state, understandably, in exchange for
our protection, has been granted extraordinary powers,
unique powers in any open society.
So imagine you have your benefits suspended on the back
of an automated decision, and you don't even know that AI was in
(26:30):
the mix. This is happening right now.
You don't have to imagine it. It's happening right now.
And it takes us right back, Michael, to your initial
question, really the sense of, well, why?
Why do we need legislation? Why do we need regulation?
Because of all of these instances: be you a benefit claimant, a
(26:51):
job seeker, a creative, a teacher in education, a transplant
patient, a voter. This is not hypothetical.
This is not something for the future.
We're not having to contemplate legislating or regulating for a
thing that is coming down the track.
This is happening to citizens. This is happening to individuals
(27:15):
right now. So this next question is from
Greg Walters. And he says, do you feel the UK
is better off building a shared AI alliance with the US based on
innovation, agile regulation andsafety alignment rather than
trying to conform with the EU slower, more centralized
(27:40):
regulatory framework? So EU versus US in terms of
international alliances, who wants to grab that one?
I don't think the EU got it completely right in their AI Act.
It's too disjointed, really, in terms of how it operates, because
it's got a sort of, almost like a special breakout for large
(28:02):
language models, whereas actually the risk
framework should apply to every form of AI, and the risk
assessment should apply accordingly.
But I think it's slightly a false dichotomy to
say: does the UK need to follow the EU or
the US? Because currently, I mean, as
(28:23):
Chris said earlier, the US has a whole series of states which are
beginning to line up different forms of regulation.
California, Colorado, you know, the list goes on, of states that
are beginning, first of all, to deal with the whole issue of
transparency, but also the question of what AI does, and
the risk, and
(28:43):
the kind of guardrails that are appropriate.
And there have been an awful lot of bills trying to get
through Congress over the years as well, without success.
But there's clearly going to have to be a point where, you
know, in order to, if you like, get it right across
the States, instead of having it differ between different states, you're
(29:05):
going to have to have some kind of federal legislation.
And we don't know what that's going to be.
So I think the UK is rightly, you know, following its own
path. But what we do have to have is
guardrails, and they will have to be risk-based.
And we do have to adopt pretty much the same standards that
NIST is advocating, and that other standard setters, like the
(29:32):
International Organization for Standardization, the IEEE and the EU,
are all espousing. Indeed, the OECD is trying to
put it all together and get convergence between the
standards. So I actually think that if only
we thought rather clearly, and maybe adopted what Chris has put
forward in his bill, we'd actually find ourselves in a
(29:54):
much better place. I think this argument
about whether or not we're going down a US or an EU track is not
going to be particularly helpful.
No, I agree with that. As Tim
identifies, there isn't a US track and, though it
would appear so, there isn't an actual EU track either.
(30:16):
When it comes to it in the UK, what do we have?
We have an opportunity because of our common law tradition, to
reach out to our friends in the EU and the US and right around
the common law jurisdictions of the world, to put in place
principles based legislation which can develop over time and
(30:42):
can deliver for innovator and investor, consumer and creative,
and ultimately for citizen. It's why I brought my AI
Regulation Bill to bear in November 2023.
It was in good shape. I got it through all stages of
the House of Lords. It was set to go in the House of
Commons until somebody thought it was a good idea to call the
(31:04):
general election. Well, we all know how that ended
up, but I brought it back just over a month ago and I still
believe that we have, if not a unique opportunity in the UK.
We have an important opportunityto play amongst friends to bring
into bear cross sector legislation which can give a
(31:27):
coherence, A clarity and a consistency of approach.
Well, that's what you want if you're a citizen or if you're an
investor. Totally possible for us to do
that. It's unfortunate that, right now,
the UK Government is, shall we say, increasingly reluctant to
(31:49):
take such an approach. Let's jump to another question.
You can see I really prioritize the questions from the audience
over my own. And this is from Funke Abimbola,
and Chris, I'll direct this one to you to
start. And she says, a risk-based
approach is the best way to go, factoring in proportionality.
(32:13):
We have numerous examples of such regulations in place and
should adopt a similar approach in the UK.
And I find this notion of proportionality particularly
interesting, because right now on the world
stage there's a lot of discussion of proportionality,
and it seems the entire definition of proportionality
(32:36):
has completely been blown up. So, thoughts on this, Chris?
My bill, to get back to that, very much has this conception of
risk-based, but with proportionality running through it.
Let me give one example to give the sense of why proportionality
is such a useful legal, and indeed underpinning social,
(32:58):
construct. So I suggest in one of the clauses
in my bill that all organizations developing, deploying or using
AI should have an AI-responsible officer.
Now, before anybody thinks this is overly burdensome compliance,
putting a dead hand on any innovator, any business, any scale, any growth:
(33:20):
because of the proportionality clause running through the bill,
don't think of individual, don't think of group, don't think of
team; think about role, think about function.
So for those micro businesses just starting off, they will
obviously have a proportionately different approach to satisfying
that AI-responsible officer role than, say, you know, a business of
(33:44):
20,000 employees with multiple sites right around the UK,
never mind internationally. So it's an important
principle. I think we can bring it to life.
There's another principle which appears in a lot of other
legislation in the UK: reasonableness.
And again, people sometimes, when you first come to it, struggle
(34:06):
with that concept: well, what's reasonable, what isn't?
But that's the joy of drafting in that flexible, agile,
developmental way, because, quite right, what is considered
proportionate today may be very different to what's considered
proportionate in 10 years' time, and quite right too.
(34:27):
So thus, the statute or the regulation has the potential to have that
developmental nature within it, because of the joy of that
agility and flexibility of English common law.
We have a question from Vanity Osmani, who says: how will the
need for sovereign AI affect the possibility of a global
(34:48):
agreement on AI regulation? Sovereign AI is actually
important, because I think we need the capacity to create our own
models. But I don't think it's going to
be very helpful for us simply to expect to be doing large
language models, because we don't have the exascale compute, we don't
have the chips, the powerful chips.
(35:12):
We don't have, except in the case of health data, the huge
data sets that have been accumulated by some of
the AI developers. I do think that, in terms of open
source development, we can really be, you know, pretty high
(35:32):
up there in terms of world rankings.
And I think sovereign AI, to that extent, is important.
But, for my money, more important even than sovereign AI, where of
course our major universities have got, you know, great skills,
and there are spin-outs and start-ups and so on, is
sovereign cloud. And what we lack is sovereign
(35:52):
cloud in the UK. And cloud is the platform for so
much that goes on, and we haven't got that.
And it's dominated by two or three major US big tech
companies. And again, that's another
example, as we talked about earlier, of American dominance
in this area.
(36:15):
to subscribe to the CXO Talk newsletter.
Just go to CXO Talk and subscribe.
We have incredible shows coming up, and just this really amazing
library of discussions just like this.
OK, we have a question from Isabelle Doran and she is CEO of
the Association of Photographers.
(36:35):
And Chris, she says: do you think that, given the importance of
ensuring UK digital sovereignty, we should be pursuing our own
agenda without the US and EU, given the EU appears to be
rolling back on guardrails for its citizens? Is it possible?
(36:55):
Yes. I think there's a real
opportunity for the UK to say how we want things to be, in an
interconnected, in an interoperable, in an
internationally connected way, but, yes, to say: these are the
principles on which we base this legislation, because these are
the shared principles on which
(37:16):
we've based our society and our economy.
And if we don't, we'll end up potentially, on one hand, being a
rule taker from across the pond, while taking something else as
well at the same time, neither beneficial for UK creatives.
(37:37):
And we have a way to use the
technologies to solve for the technologies.
If we think about what we can do with a combination, say, for
example, of metadata, watermarking and fingerprinting, in combination
there's a real opportunity to offer really effective,
sustainable, technology-proofed solutions for our photographers
(38:02):
and all of our great creatives. And bear in mind, folks, our
creative industries in the UK, and it's a picture mirrored in
many other jurisdictions, are growing at twice the rate of the
rest of our economy: £126 billion.
It's worth thinking hard about what we do to support all those
great creatives, who not only do so much for our economy, but do
(38:26):
so much, frankly, to nourish our souls and make life the
beautiful thing that it truly can be.
Could I just add to that, and I totally agree with you, Chris: what we mustn't do is sacrifice all those hard-won creative rights on the altar of some kind of growth strategy.
(38:46):
You know, that would be utterly futile at the end of the day, and counterproductive.
Let me toss out a question to either one of you, digital
assets and tokenization. Why is legal clarity around
digital assets, cryptocurrencies, NFTs, tokenized
property important for innovation and market
(39:07):
confidence? Either one of you very quickly
please. If you bear in mind, whichever
stat you take from various consultancies, it's likely that
around 80% of all value is goingto be exchanged by tokens by
20-30. So this is an extraordinary
opportunity for any economy, forany jurisdiction.
(39:30):
And again, what do we need? That clarity, that certainty, that consistency. What delivers, then, the confidence for people to invest, for people to innovate, for people to develop platforms, for people to develop tokens themselves. So in brief, in the UK, we're just bringing forward the new Property (Digital Assets etc) Bill. Tim and I both sat on the special bill committee for that. We've got report stage in about
(39:53):
3 weeks. What that seeks to do is give that clarity that digital assets can be considered as property.
So really delivering to that certainty point, which enables
investment, enables innovation and again, to the growth point
that Tim rightly raised, the government here and around the
world talks about growth. Well, who wouldn't?
(40:15):
But if you really want to consider where growth is most likely to come from positively, in the shortest time, in a sustainable, effective and emancipatory way, it's going to come from these new technologies and it's going to come from everything around digital assets and tokens.
There's a great deal of impatience in the industry about
(40:36):
the fact that government really isn't picking up the ball
quickly enough. We may be identifying the need to define exactly what is a digital asset and what is not, and the legal definition. But, you know, as an economy, we're not really moving fast enough on this.
Tim, can you give us one sentence on open banking, open
(40:57):
finance initiatives: why do they benefit consumers, and what should be done there? And literally just very, very quickly, please. We've got legislation going through on really trying to roll out the concept of open banking, in terms of joining up, for the benefit of the consumer,
(41:18):
a lot of the data that is held on them by different service providers, which happens in banking. And the idea is to try and make that available to a wider group of businesses, in terms of the services provided to the consumer.
great way forward. I have my concerns, as I think most legislators do, about making sure that the data is firmly secure. Because, you know, if you imagine the idea of open banking, where different financial institutions have your data and it's all designed to be for your benefit in a personalized kind of way, you know, your 401(k) or whatever it might be, Michael. But you know, this is
(42:06):
going to be an important set of services.
And you know, this is why we need legislation to make sure that there are ground rules around this, that it is secure, and that the consumer really does benefit. And when it comes to the citizen, the citizen also benefits. Tim, just literally one sentence
(42:27):
on the importance of open source LLM models in this broad scheme.
Very quickly please. I'm really keen on open source
models because, you know, even if it's off the back of Llama or DeepSeek or whatever, it gives the smaller developer the chance to use the power of those big models and, using their own
(42:50):
particular data set to really move forward in a whole bunch of
places, particularly healthcare,where this could be game
changing in many ways. Chris, let me jump to you.
And this is a question from Dr. Carolina Sanchez Hernandez. And she says, from a governance and assurance perspective, how do we agree on
(43:14):
best practices within this tech race?
And if you can answer this literally in one or two
sentences, we are going to run out of time.
By the approach that we should always take to legislation: considering the context in which we're legislating, ensuring meaningful public engagement. Clause 7 in my private member's bill is all about public engagement.
(43:36):
The most important clause. Getting that right really matters.
And again, technology enables such an opportunity to transform
how we engage with the citizenry, how we engage with
the public. So having that, having all of
that in place gives us the best opportunity to come up with the
right results. But again, at the risk of sounding like a cracked record, doing it in the legal code in which we legislate gives the agility, gives a dynamism, so we can
(43:59):
develop over time. Tim, we have a question specifically directed to you from Bridget E, who says: even if the UK regulations are eventually risk-based, like the EU AI Act, will they keep up with rapid changes in technology?
in technology? Chris and I are great believers
(44:20):
in agile regulation. And if it's sufficiently agile and sufficiently principle- and outcome-based, actually you can do that. You don't have to regulate for every possible form of AI or every possible neural network or algorithm, however you might define it.
(44:42):
If you define it by the risk and the outcomes, then I think you're in a much better place. And you know, we're used to outcome-based regulation in the UK, principle-based regulation. This is not, you know, a foreign concept for us. And it's worked perfectly well in a number of different areas, particularly financial services. And add to that the question of
(45:04):
sandboxing, which Chris and I are both very keen on.
That then gives you the ability to safely develop, to innovate under the eye of the regulator without transgressing the regulations. And it means that you can
innovate safely. And then when you've got to your
(45:24):
beta point, your product that is fit for purpose, that's the point when you can take off, and you know that you're meeting the regulatory requirements. So, you know, I'm a fan of proportionate regulation, and I do believe that we have the makings of it. Let's move on to Michaela
(45:45):
D'mello. And Michaela says she's seeing a
fragile relationship between regulation of AI and the
baseline data ethics issues thatgo a step beyond regulation and
key and are key for building public trust.
What are your thoughts about this relationship?
(46:08):
As regulations evolve so quickly, how can we encourage
organizations of all sizes, innovators, etcetera, to choose
to be more ethical in their use of AI?
So Chris, thoughts on the ethical use of AI in relation to innovation, please. An ethical approach not
(46:29):
only is the right approach, but ultimately it will always make
good economic sense because of the certainty, the reliability,
the solidity that comes from that approach.
So even if you only do it for economic reasons, and, you know, people would do it for broader reasons, even if you only do it for economic reasons, it's the right approach.
Moving beyond that, you can talk about responsible AI, which I
(46:51):
think is a very helpful conception.
And to the earlier question, I think that sense of the role of the professional: I think we can have professional standards and the professionalization of the data science and AI world, where people work incredibly hard and come up with great concepts,
(47:13):
great models. That world would really benefit from having some professional standards, some qualifications recognized around that. All of that would help.
And to the Hong Kong point, I was down in Hong Kong
last spring. It's really interesting what's
happening anyway with the Hong Kong Monetary Authority in terms
(47:33):
of sandboxing around potential AI models in the financial
services arena. They're really using Hong Kong
as a Petri dish, not just for AI.
But back to our earlier discussion around crypto and tokenization.
It's fascinating what's happening there, but all of it
comes back to that sense of another thread running through
(47:55):
all of this: professionalization, ethical approach, responsible AI. What ultimately we're saying there is that it's about standards. This is absolutely fundamental to public trust. If you haven't got public trust, and we have had enough difficulty getting public trust for the use of data sharing and access in so many
(48:19):
different ways. We've got to make sure that we
don't make the same mistake withartificial intelligence.
But we must gain that public trust, because that's what gives everyone the licence to use it and to make our lives better.
Right, Tim? It's why I always say it is the most important clause in my bill, because it's exactly the
(48:40):
point you made: if the public don't trust AI, then most likely they're not going to avail themselves of the benefits and opportunities, and simultaneously they're likely to suffer the sharp end and the potential burdens of it.
And that would be utterly tragic.
But the great news is, if we so choose, it's utterly avoidable.
(49:00):
Here's a simple, very important question from Dawn Davidson, who
says given the unique insights of employees working directly
with AI, do you agree that stronger whistle blower
protections are urgently needed in the UK to ensure concerns can
be raised safely, particularly particularly as this technology
(49:24):
rapidly evolves? Chris, you want to jump into
that really quickly, please. Yes, I think whistle blowing has an important role to play in better relations around all of this stuff in the workplace. I think there are some real opportunities with the Employment Rights Bill, which has come to the Lords right now. So I know Tim and I are going to
be working a lot on this and the numerous issues around that relationship between AI and the employee and the employer
indeed. This is going to be one of the
big things in the future as AI increasingly is adopted by
employers. It's going to be AI in the
workplace, and whistle blowing is a crucial part of that.
Tim, Arsalan Khan asks another very important question.
(50:07):
He says what about the digital divide?
Should countries that are far ahead in AI even care about
those who are far behind? Well, I think accessibility is
going to be absolutely crucial. And of course the interesting thing is that different jurisdictions have
very different approaches. I mean, you know, we all thought
(50:29):
that Africa was behind, you know, that many sub-Saharan African countries were behind in technology, but then they leapfrogged us in
terms of payment systems, you know, using mobile payment
systems well before many of us got there.
So there is no iron rule that says you can't adopt new
technology even if you're a developing country and you don't
(50:52):
have the actual access to, you know, every form of Internet and
so on. You know, mobile communications
are, you know, extremely sophisticated in quite a number of countries that you wouldn't think had very advanced economies.
But I do believe that governments have a duty to try
(51:12):
and make sure that their citizens have as much access or
have a level playing field, if you like, for access to these
technologies. And from my money, you know,
we've just had a digital inclusion plan published by the
UK government, which is broadly on the right track.
But I think there should be an absolute right to Internet
access, for instance, or at least to 5G.
(51:35):
And that needs to be, you know, the bottom line, really, of the
digital world. Vincent Cezara simplifies his
question for me, and he says: are people ever going to get an apology for their social media chats, published artistic and creative works that were stolen from them, taken without informing them that they were to be used to train AIs?
(51:57):
He wants apologies for his work and other people's work being
sucked up into the LLM machine. Sadly, not from all the people
who purloined their works. But never mind an apology, there should be remuneration for all those works which made these large foundation models in particular.
There should be remuneration, there should be respect and
(52:19):
there should have been consent. And that needs to be a three-pronged approach moving forward. Tim, Lewis H says (I don't know if he means you or the global you, meaning both of you) that you mentioned regulating
automated decision making using bank loans as an example.
(52:39):
But as you know, this kind of decision making is a constant
across AI systems. Isn't there a danger that clever
lawyers will find loopholes, much like accountants
do with tax law? Wouldn't it make more sense to
focus on regulating outcomes to ensure they're safe and
transparent, rather than trying to constrict the machine?
(53:00):
I don't think the two are incompatible, quite honestly.
You know, I believe in outcome-based regulation, but on the other hand, I'm a lawyer, so, you know, I do believe in, if you like, the legal process. If there are flaws in regulation, then the regulator isn't doing their job, or the regulated financial service
(53:23):
provider is not doing their job, and then we need to make sure we have the remedies. But, you know, we're back to proportionate regulation again, basically.
And Chris, Anthony Scriffignano comes back because he wants to make a point. And he was the chief data scientist at Dun &
(53:44):
Bradstreet. So he's an expert in this.
He says malfeasance in the context of cryptocurrency is often not covered by the same protections as fiat-based currency. The best fraudsters change their behavior faster than the best regulators.
Well, I think there's fraud in crypto as there's fraud in fiat, and fiat fraud still dwarfs crypto fraud.
(54:04):
Without being unaware of the difficulties with crypto, getting the right-sized regulatory framework for crypto and tokens is far more interesting. As you know, Anthony, it's a far broader area than simply crypto so-called currencies.
Getting the right regulatory framework in place, getting more
(54:26):
public understanding of these technologies gives us the best
chance to get the protections inplace.
And back to the earlier question Tim answered: a one-word answer, labelling. So if somebody is going to be subject to AI in the machine, or an automated decision, if it's labelled, they'll know that's the case, and that will give them the best chance to decide whether they want that or not.
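The labelling idea mentioned above can be sketched mechanically. This is a minimal, hypothetical Python example, not drawn from any real statute or API; the field names and the model name are invented for illustration. The point it shows is that the label travels with the decision itself, so the person affected knows a machine made it and can act on that.

```python
def label_decision(outcome: str, automated: bool,
                   model: str = "credit-scoring-v2") -> dict:
    # Attach a machine-readable label to the decision record itself,
    # so the disclosure travels with the decision rather than sitting
    # in a policy document nobody reads. All field names are invented.
    record = {"outcome": outcome, "automated": automated}
    if automated:
        record["model"] = model
        record["notice"] = ("This decision was made by an automated "
                            "system; you may request human review.")
    else:
        record["notice"] = "This decision was reviewed by a human."
    return record

loan = label_decision("loan declined", automated=True)
print(loan["notice"])
```

A regulator auditing such a system would then only need to check that the notice is present and accurate, which is the outcome-based framing both speakers favour.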
Tim, to finish up, let me directthis first to you and then I'll
(54:49):
direct the same question to Chris.
What advice, Tim, do you have for policy makers regarding the
building of public trust in AI, digital assets, and open data?
Introduce proportionate regulation that basically makes sure that the high-risk AI is regulated.
(55:12):
That would be my advice, and that would create public trust and give us the licence to keep innovating.
Chris, same question. Advice for policy makers to
build public trust in regards totechnology and regulation.
Engage, engage, engage. We have what we need to make a
success of this because we understand critical thinking,
(55:35):
values, ethics, responsible approaches, economics,
philosophy. We need to be human.
These are tools. They're incredibly powerful
tools, but they're tools in our human hands.
We decide. We determine ultimately our
data, our decisions, our human-led digital futures.
(55:56):
And with that, a huge thank you to Lord Chris Holmes and Lord
Tim Clement Jones. Thank you both so much for being
here. I'm grateful to you both.
Thank you. Thank you, folks.
Thank you for your questions and for watching. And Tim and Chris, I hope you'll come back. I feel like you're both so
(56:19):
smart, and the questions I was asking didn't challenge you enough. And so please come back again and let's do this one more time. Brilliant questions.
Everybody, thank you for watching.
We'll see you next time. We have amazing shows.
Check out cxotalk.com. Be sure to subscribe to our
(56:41):
newsletter because we want you as part of our community.
Thank you so much everybody, andI hope you have a great day.