
November 4, 2025 · 59 mins

Microsoft invested $13 billion in OpenAI. Amazon poured billions into Anthropic. The AI oligopoly is here.

Lord Tim Clement-Jones and Lord Chris Holmes, architects of UK AI policy, reveal what business leaders must know about Big Tech's grip on AI, from vendor lock-in risks to circular funding patterns signaling bubble collapse.

🔷 Show notes and resources: https://www.cxotalk.com/episode/house-of-lords-members-on-ai-does-big-tech-own-you
🔷 Newsletter: www.cxotalk.com/subscribe
🔷 LinkedIn: www.linkedin.com/company/cxotalk
🔷 Twitter: twitter.com/cxotalk

🎯 KEY TAKEAWAYS:
-- Why AI concentration mirrors 19th century monopolies
-- The regulation vs innovation false choice
-- Vendor dependencies creating strategic vulnerabilities
-- UK vs EU vs US regulatory approaches
-- Copyright battles over training data
-- Board-level AI governance essentials

⏱️ TIMESTAMPS:
00:00 🤖 The AI Oligopoly and Big Tech's Influence
02:27 🌐 Opportunities and Risks of AI Development
09:03 ⚖️ Regulation and the Future of AI
12:43 💻 The Risks of Overdependence on Big Tech
15:33 🧠 The Importance of Media and Digital Literacy
17:55 🌍 Human Values in Technology and Education
21:25 📚 Education and Regulation in the Digital Age
24:36 🌈 Optimism and Diversity in Technology
27:13 ⚠️ Dependency and Vulnerability in Technology
29:58 🤖 Navigating AI's Impact on Employment and Society
34:46 🏛️ The Role of Government and Lifelong Learning in the AI Era
39:16 ⚖️ Challenges of AI Transparency and Regulation
43:07 ⚖️ The Importance of AI Regulation and Transparency
45:00 🌐 Open Source and Diverse AI Models
52:18 ⚙️ Balancing Regulation and Innovation in AI
55:04 💰 Economic Interdependence in AI Development
55:48 📈 Advice for Business Leaders in the AI Era
56:54 ⚖️ Guidance for Regulators and Policymakers on AI

💡 ABOUT CXOTALK:
CXOTalk connects you with the world's most innovative business leaders, senior executives, and experts sharing insights on digital transformation, leadership, and innovation. Every episode brings you actionable strategies from those leading change in their organizations.

#ArtificialIntelligence #AITransformation #monopoly #bigtech #cxotalk #houseoflords #inclusion #aiethics


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Microsoft has invested $13 billion in OpenAI. Amazon poured billions into Anthropic. Google controls your search data. Meta has your social graph. The AI oligopoly is here, today on CXOTalk number 899. Lord Tim Clement-Jones and Lord

(00:24):
Chris Holmes, key leaders shaping UK AI policy in the House of Lords, reveal what business leaders must know.
Now I'm your host, Michael Krigsman. Let's get into it.
Tim, tell us about your work in the House of Lords and your focus areas.
I speak for my party on all

(00:46):
matters science, innovation and technology, and I chair a cross-party group on artificial intelligence as well, which I founded back in 2016, 2017. And one of the key issues for me is the power of big tech, which is increasingly like going back, in a

(01:09):
sense, to the big oil issues back in the 19th century, where you had vertical integration of an absolutely essential utility by a limited number of businesses. And that's what we are seeing increasingly. So already we have a lot of big tech concentration. We've got, you know, control

(01:30):
over two platforms really, Apple and Google. We've got cloud services really in the hands of, say, two: Microsoft and Amazon Web Services. We've got concentration. We've got Facebook, which alongside TikTok is one of the two really major social media outlets.

(01:51):
So we're getting this concentration in digital, which is now being replicated in artificial intelligence, especially with the links between OpenAI, for instance, and Microsoft, between Amazon and Anthropic, and of course Meta with its own large language model.

(02:11):
So this kind of concentration is beginning to have a major impact, which I think will have knock-on effects for society, for our democracy, for countries trying to develop their own AI models.
Chris, tell us about your work in the House of Lords and any initial thoughts that you have

(02:32):
on this big tech set of issues.
I try and cover new technologies including AI, blockchain, cyber and all things digital assets and tokenization, with threads of ESG and EDI running through all of those. I also cover financial services regulation and all things fintech.

(02:55):
And really what I'm trying to bring out are the possibilities, the potential of all of these new technologies in themselves, and then looking at some of the really exciting value cases where you look at the technologies in combination with one another. But we only win with these

(03:17):
technologies if we human-lead on them. We only win if we focus on the public good, the common good, and see them as enablers, as empowering, and really in all our human hands. That's the opportunity. But as we all know about opportunities, they're never

(03:38):
inevitabilities. It's down to us to make it so.
Tim just described this concentration of Microsoft and OpenAI, Amazon and Anthropic, Google with Gemini and DeepMind. You have Meta with Llama. What does this concentration mean for competition and for society?
It means concentration of

(04:05):
information sources, effectively. I mean, one of the key things: this isn't just about, you know, people accessing large language models for the hell of it and for use in their daily life. And I entirely agree with Chris, the opportunities are there, but the risk is that what we're doing is we're getting our information from very limited

(04:28):
sources. I mean, for instance, the impact on news media of the digital era has been considerable, with the monopoly of advertising that is now pretty much held by Google and Facebook. But of course, with the advent of artificial intelligence, which, you know, in a sense scrapes news data from the

(04:50):
Internet, that means that concentration and that impact on the news media is even higher. So this is becoming of great importance in terms of how people see the world. If people grow up and get all their information from one of the large language models, then they are directly influencing what

(05:12):
we're doing and seeing. And I think, just digressing slightly, robotics is going to make that even more pronounced, when AI is embedded in robotics that we use for everyday life. So this is not a small thing.
There's no way that we can say that we win with these technologies if we just have a

(05:38):
winner-takes-all reality. There's no question these companies, and we all know the list of them you set out at the outset, Michael, have come an extraordinarily long way in a relatively short space of time. Their business model is clear, their scaling has been clear,

(06:00):
and that is one particular model, one particular approach. But we win with these technologies if we truly have a plurality, and understand that AI isn't one technology. AI isn't just LLMs, which I think the public narrative is tending

(06:22):
towards in that sense: we've been on an AI journey, we've now reached this point, and this is it, this is the zenith in many ways, large language models, that's what we've been shooting towards. That's but one element of AI in a whole constellation of technologies. So I think each individual, each company, each community, each country needs to consider what

(06:45):
they want their AI, their technology story, their technology narrative to be. How do they play to the elements, the values, the beliefs, the social and democratic structures in that nation? It means really thinking deeply into this. And that will be the way that we succeed with this.

(07:07):
It can't be that we just go down a route of winner takes all.
Chris is absolutely right. Increasingly, I think, this is, if we're not careful, going to lead to a kind of nationalism as far as AI is concerned. There is this very strong mood, both in countries within the European Union but

(07:32):
also in the UK, that we need to be able to develop our own sovereign AI capabilities, and that if we're not careful, U.S. companies are going to drown out our own ability to develop AI tools of every description. And so, if we're not careful, there is going to be a

(07:52):
backlash. And, you know, I want to avoid that, because I think we need to make sure that we take advantage of every kind of tool that is available internationally. But if we're dominated by three or four businesses, then I think that's going to create a reaction which would be extremely unhelpful.

(08:14):
And you know, we've seen what happens if you start applying sanctions and tariffs and so on. People get rather excited about that. And I don't believe that that is going to help develop our, if you like, technological partnerships across the world.
Folks, we have two tremendous guests today. Ask questions: if you are

(08:39):
watching on LinkedIn, put your question into the LinkedIn comments; if you're watching on Twitter, use the #cxotalk. And we have some questions coming in already relating to the international aspects and the economic aspects, and we're going to get to those questions momentarily.

(08:59):
But I have a question for you both. You've both raised certain types of cautions. You've described the potentially very significant impacts on society. But there's also another side to this, which is that these companies are taking risks that

(09:19):
few organizations have the capability to take in order to make these investments in the LLMs, in the data centres and infrastructure and so forth. And so frankly, given that they're making the investment, what business is it of any of us to interfere? Let them do it.

(09:42):
Let us all reap the benefits of AI.
History tells us that no industry sector, no market, no
society or economy is at its best if there are no rules, if
there are no regulations, if there isn't a sense of

(10:06):
certainty, consistency, coherence, stability.
Now, we're some way off that, I think, with these current approaches to AI, but everything tells me from that history that right-size regulation would be extraordinarily beneficial, not

(10:29):
just for citizens, but crucially for citizens, for consumers, for creatives. But also, it's abundantly clear that right-size regulation is good for the innovator, good for the investor, because of that clarity, certainty, consistency and stability that comes from it. If you wanted a huge underscoring of that point, just look to English common law, used

(10:54):
around the world. Why? Why is it used in countries, used in agreements, where people have no connection to the United Kingdom, have no intention of coming to the United Kingdom, but they still structure their agreements on English common law? Because of the stability that it brings, because of the certainty, because of the reliability of it.

(11:14):
So it clearly makes sense to have the right-size regulatory framework around these systems. If we look at the particular model in the States, it's very clear that you can take that approach in an economy, in a market, of that size, which is

(11:35):
pretty much: take as much economic resource as you can, take as much compute power as you can, take as much energy as you can, take as much data as you can on one side of the ledger, put that into the mix and then see what the outcomes are. It's an approach.

(11:56):
The results have certainly been more than interesting, but it's not the approach. There's a whole series of different approaches: looking at what you might do with that data before it even goes into the algorithm, looking at how much of it is put in, what purpose you're getting

(12:16):
at. So, sure, it's one model, but the claim that it is exceptional, that it is like nothing that's come before in our societies, in our economies, in our democracies, so these companies need to be left alone, unshackled,

(12:40):
unfettered: that I don't accept.
I completely agree with Chris. And what we're seeing is incredible overdependence beginning to arise. I mean, you only have to look at Nvidia's market cap and the market caps of some of the other big players. I mean, $5 trillion is a bigger sum than the GDP of any country apart from the US and

(13:02):
China. So, you know, we're talking extremely large entities here. And that dependence on those players risks a huge bust at the end of the day. You know, people have been talking about a kind of market bubble. You know, the bankers are

(13:23):
already getting pretty nervous. You know, you talked about the investment, Michael. And of course, that kind of vertical integration in terms of data centres, compute power and so on is exactly what the big tech companies are doing. And they're deliberately designing it so the stack, the tech stack if you like, is something that

(13:45):
isn't really very interoperable. Once you start developing your AI model, you're in their stack, and it's not that easy to shift to any other supplier. So that itself is monopolistic behaviour, and, you know, will lead to just two or three major large language models being available to the public.

(14:07):
So choice will be limited, and that will have a knock-on effect on governments. Already the UK government really only buys from US vendors, the major tech companies like Microsoft, Google, Amazon and so on. UK developers don't get a look in.

(14:28):
So this has major impacts on economies beyond the US and societies beyond the US.
That concentration risk Tim raises is absolutely critical, be it the economic concentration risk, be it the technology concentration

(14:50):
risk. And you only have to look at the AWS outage last week, which was just a minor glitch. You know, these things happen, but look how that rippled right around the world because of that concentration in a small number of players. It's critical for businesses,

(15:12):
and for governments, to really focus on that. Tim rightly raises concentration risk, economic and technological. If you're going to start somewhere, it's not a bad place to start when you're making your considerations around these issues.
We had a Microsoft glitch. So, you know, I think that's a very good demonstration of that kind of overdependence.
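To make the lock-in point above concrete in engineering terms, here is a minimal sketch in Python (an illustration, not something from the episode) of a provider-agnostic completion interface. The LLMProvider and EchoProvider names are hypothetical; a real adapter would wrap each vendor's own SDK behind the same signature, so application code never depends on a single supplier.

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Vendor-neutral interface; application code depends only on this."""

    @abstractmethod
    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        ...


class EchoProvider(LLMProvider):
    """Stand-in provider for local testing. A real adapter would wrap a
    vendor SDK (OpenAI, Anthropic, a self-hosted open model) here."""

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        return f"[echo] {prompt[:max_tokens]}"


def summarise(provider: LLMProvider, document: str) -> str:
    # Call sites never name a vendor, so switching suppliers means
    # writing one new adapter rather than rewriting the application.
    return provider.complete(f"Summarise in one sentence: {document}")


if __name__ == "__main__":
    print(summarise(EchoProvider(), "Quarterly cloud spend rose 40%."))
```

The design choice is ordinary dependency inversion: the switching cost Tim describes lives at the call sites, so keeping vendor names out of them keeps the exit door open.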

(15:33):
We have a very interesting question from Anthony Scrifignano, our mutual friend, Tim, and he says: your points about big tech influencing how we see the world are spot on. Isn't part of how we win related

(15:54):
to our ability to teach the workforce and the user base to ask critical questions about why they should believe what they're seeing from AI? Which of course gets right to the heart of AI-automated misinformation and disinformation, and the fact that you have programs now like

(16:16):
Sora that create videos that are absolutely fake and look absolutely real.
Chris and I are both passionate about media and digital literacy, and, you know, without that we are even more vulnerable. But at least if we have some media and digital literacy, which really government,

(16:38):
local government, central government, educators, a whole range of people, regulators as well, need to get involved in, we can make sure that we do develop those critical thinking skills in the face of digital media, in the face of AI tools. Because, you know, especially with hallucinations from AI models, we need to know whether

(17:01):
we're being lied to, effectively, and whether or not the information we're getting is correct. You know, we might open a major mainstream newspaper, for instance, whether digitally or in the analogue form, and we're pretty sure, if we buy the right newspaper, it's going to be accurate.

(17:23):
We're not sure about AI. So we need a different set of skills, and we need to actually discern misinformation, disinformation, deepfakes. You mentioned Sora, Michael. Absolutely. And so the risks build up. And in the face of that, we have to make sure we've got strategies for developing digital and media literacy.

(17:45):
And I'm sad to say that I don't think, in the West, apart from one or two notable countries like Finland and Estonia, we're really getting it right yet.
We have to put the human at the heart of all this and human-lead on these technologies, which means imbuing these technologies with our human values and our principles.

(18:06):
Trust and transparency, inclusion and innovation,
interoperability and international outlook,
accountability, assurance, accessibility.
All good to be in the mix. But there could never be a more critical time to have all of our critical senses to the fore, and to enable young people, through schooling and through lifelong education,

(18:27):
to have their critical faculties to the fore.
And we've got a Children's Wellbeing and Schools Bill going through the House of Lords at the moment. I've got a number of amendments down there around these areas: having not only this understanding of how edtech is working in the classroom, but having hard-wired through the

(18:49):
curriculum, not just in computing subjects, not just in science subjects, but hard-wired through the whole curriculum, this sense of digital literacy, media literacy, AI literacy, data literacy, resilience, character education. These are what all of our young people need, because that is what will enable them to have their sword

(19:11):
and their shield and will be a key part of enabling them to
succeed in this digital future now.
But how do you go about accomplishing that?
You know, Google has search data, Meta has social data, Microsoft has productivity data. How do you, Chris, go

(19:32):
about injecting that human, inclusive element into these companies that are relying on these enormous concentrated data sets? How do you do that?
First and foremost, because we have the great good fortune of still having the most splendid,

(19:57):
stunning of supercomputers, the most efficient, the most high-tech, if you will: the human brain, which gives us all of the opportunities that we need, if we deploy it, individually and collectively, as to what we want, how we want to bring these

(20:19):
frameworks into legislation, into regulation.
And none of this means the heavy hammer of bans or controls in that sense. It means having the right-size regulatory approach, because everybody flourishes if you have that. And it means nothing short of having those human values running through all of this.

(20:42):
Because the truth of it is, I do believe this. I mean, maybe I have a touch of Panglossian naivety about me, but I think, as an individual, as a company, as a society, you only succeed long term, firstly, together, and

(21:02):
secondly, if you have those enduring human values wired in. These companies that we're discussing, on one level they're doing extraordinarily well at the moment. But look at 1900, at how many of the businesses that were on the stock exchanges then are still there now. Look at 1950.

(21:25):
Look at 1970. That's not to say we should just wait and see. It means we should be very forward-facing, enabling our education system. We've got a curriculum review in the UK which is due to report any time, and that needs to have all of this wired through it.
But we can lead on this, and the worst thing we can do is to

(21:47):
think it's all too big, it's all too complex, and retreat from the public sphere. We need to be even more present, even more connected, properly connected, humanly connected.
And we truly then can be very cognizant of the risks and put
in the right frameworks there, but really put everything in

(22:07):
place to drive, to enable, to empower the upside. Because there is no doubt what we can do with these human-led technologies in our human hands. We weren't able to do anything like this even a few years ago.
What we have to be very careful of is thinking of Gen

(22:27):
Z and Gen Alpha and so on as being digital natives. I think it's actually quite interesting how cavalier young people can be about their own data. And you know, Chris is right, we need to have that in all our institutions. One of the big issues we have is with another generation, our

(22:49):
teachers: upskilling them is an absolute priority in terms of understanding how they can then impart, if you like, digital safety messages and critical thinking messages to the young people that they teach. But Chris is right, you can't do without regulation.
And you know, we're obviously experimenting with our online

(23:12):
safety legislation. There's been huge pushback from the tech companies. We're age-gating at 18. We're trying to make our Internet safe for children in particular, but we are facing some really big issues, because how do you capture all the harms that are

(23:34):
occurring online? And new harms are occurring all the time. You know, for instance, we don't think we captured the use of chatbots in social media directed at children well enough. There are things like live streaming which we don't think are fully captured properly. You know, to be agile

(23:56):
as a regulator is very, very difficult when you're
interpreting legislation that may not have caught up with the
new harms as they occur. And that is something that, as legislators, Chris and I have to work with. We're slightly impatient with regulators, but sometimes the regulator isn't at fault; it's us.

(24:16):
As legislators, we didn't encompass what we really needed to encompass with the legislation, and we pay the price for that.
We have a bunch of questions that are coming in. Let's jump to some questions. But I just have to respond to one comment that Chris made. He referred to himself as

(24:38):
Panglossian, as potentially having Panglossian naivete.
And I was thinking, Chris, as you were talking, that you're really an optimist.
I'm a rational optimist, yeah.
OK, let's jump to...
I'm a conditional optimist.
So we have a rational optimist and a conditional optimist.

(24:58):
I'm an optimist as long as there's regulation down the
track, you know? Well, what are you, Michael?
I am a realistic optimist. I'm not sure how that's any different from either of you. Or a sceptical optimist, because I've worked with so many tech companies over these last

(25:19):
decades, and I understand the motivations, and I understand the pressures and the goals and the power of these companies and the level of resources.
I mean, the watchword for me is diversity, in so many ways. I'm a great believer in a diverse society, because you get a range of ideas, you get a range of creativity.

(25:42):
No, it isn't one-size-fits-all. It isn't conformity.
And if you find yourself focused on only three or four major businesses in any particular sector, you're getting a conformity. You're getting, you know, a lack of competition and a lack of diversity.

(26:02):
I once did an analysis of major software companies who were producing ERP products, and I did a word analysis, and it turned out that across all these major companies the marketing, the messaging, was virtually identical, because there's so much movement among the leaders of these companies.

(26:27):
That's been so interesting. You see Microsoft acquiring, you know, the ex-CEO of DeepMind at Google, and so on. And the shifting between those big tech companies is significant.
If you're watching, this would be an excellent time to
subscribe to the CXOTalk newsletter.

(26:50):
Join our community. We will notify you of upcoming
shows just like this. Go to cxotalk.com and sign up.
Just take a second. Do it right now.
All right, we were on Twitter and now let's take some
questions from LinkedIn. And Jerry Kopitch has been
waiting patiently and he asks this question.

(27:12):
He says: how insulated is the UK if the global AI economic bubble bursts?
We're pretty vulnerable actually, especially our UK government, because, you know, Chris mentioned the AWS outage recently; we had

(27:35):
a Microsoft outage as well. This is, you know, vulnerability on a big scale if the whole of your government depends on three or four US suppliers.
I mean, if Palantir went down for any particular reason, our government, in defence and across our health service,

(27:55):
could well be in trouble, because Palantir helps to organize all the data within those two services. So, you know, we haven't mentioned them, but they are a significant player internationally in those two sectors, and in the UK as well. So this is no small thing in terms of what could happen in

(28:17):
the future. And cybersecurity: we've seen vulnerabilities on a major scale from companies like Microsoft in the past. And so again, if you only have one or two major suppliers, whether it's platforms for operating systems or

(28:39):
mobile platforms, you're again creating vulnerabilities for yourself. Once bad actors find a way in,
then whole businesses are down. We recently had a major outage
for our biggest car manufacturer, Jaguar Land Rover, and

(28:59):
they've been down for months and months, and the cost to the business is billions, a huge hole in their bottom line going forward. Now luckily, you know, they're owned by a major Indian company, and they can just about afford to go through it.
But their suppliers, parts and so on in the supply chain have

(29:23):
been really badly affected and had to be propped up by loans from the UK government and so on. So you know, this is not some theoretical dependency. This is a huge issue.
Chris, we have a question from Arsalan Khan.
Let me direct this to you. And I think it's quite related
to this set of issues. Arsalan Khan says on Twitter

(29:48):
companies are blaming AI for their massive layoffs.
How can or should government step in, in this rapid change? What can government do that doesn't take 10 years?
It's a really key point, because

(30:08):
it takes us back to the human in all of this. And if the human must be at the heart of the technologies, and if the human must be leading on these technologies, then the human, in terms of quality, meaningful work, needs to be at

(30:29):
the heart of all this. The key issue with all of this, which I don't think is considered enough, is how you lead on the transition. I think that, as with other huge shifts in our time, AI and other technologies will be job

(30:51):
creating. And that's ultimately fine. But the transition, as it has been when other industries have started to wane and decline: you have to, as a government, as a society, understand that transition and how you're going to manage through it and lead through it.

(31:12):
Because nobody is going to accept huge layoffs with no
societal, no governmental level consideration on this.
And we've seen it with other industries when they have started to disappear: that hasn't been fully understood, it hasn't been properly dealt with, there hasn't been the public debate, the discourse,

(31:32):
the "how do we all go through this together?" We can't just have, for example, many businesses not taking on newly qualified graduates. And that's already happening massively across all of those sectors. Well, how, as a young person, do you get your career underway?
How do you get to be that sort of four, five years into your

(31:55):
career if a lot of the entry-level work and tasks are being taken over by AI? Well, there is absolutely a way through that, and that is focusing on the individual, on human aptitudes, on human skills. But it requires leadership; it requires a reimagination of how work is structured, how job

(32:20):
roles are structured, understood, advertised for and onboarded for. Because the reality is, AI isn't as such wiping out jobs. It's doing things: it's doing particular tasks, it's doing particular parts of particular roles. It's critically important to understand that. But that is the start point to

(32:42):
then understanding how to structure, how to lead on, the labour market for the future, where, as Tim mentioned a few minutes ago, the right way to do it is to focus on that diversity and that inclusion and to lead on that. And then we should be rationally positive, rationally optimistic

(33:04):
as to how we can go about it.
We do have a paradigm for this, of course, in terms of how the horse gave way to the car. And I, you know, often think about that photograph of 5th Ave. in 1903 where there was one horseless carriage driving down 5th Ave., and all the rest were traps, pony traps, horse traps.

(33:29):
And then ten years later in 1913, you didn't see a single
horse and carriage going down 5th Ave. in that famous photo.
And we don't know yet what jobs are going to be there in 10 years' time. And so we have to worry about this lack of entry-level jobs. And we have to be absolutely intent on upskilling in the way

(33:54):
that Chris described. And not just for the younger generation trying to get jobs at the moment. We have to be upskilling the older generation as well, because jobs are being substituted, even in big tech. I mean, Microsoft is shedding thousands of jobs currently as we speak. And many of the other big tech

(34:14):
companies are finding that they can do their coding through AI, not through using humans, and so on. They're noticing that, and other businesses, which may be more analogue, are still going to have to shed jobs in order to remain competitive.

(34:35):
But because we don't absolutely know what new jobs are being created, we have to be very agile in all of this. And you know, we've been here before, and it wasn't a happy time.
What is the role of the government in this? Are you advocating that the government forces companies to undertake this reskilling?

(35:01):
Is it government programs? What's the right construction?
I think they have to be more activist, and there are some good models out there. But, you know, recently governments have adopted a kind of lifelong learning approach without really giving it enough teeth and resource to do that. They've accepted the obligation without really giving the tools to those who are trying to

(35:25):
deliver that lifelong skill set, if you like. So you can retrain if your job becomes out of date in terms of the technology; you should be able to upskill with a grant or with a loan or whatever. Now that is coming into place, but it's not nearly fast enough. And for young people:

(35:46):
apprenticeships, particularly in some of the industries where freelance working is the norm, again, there isn't enough apprenticeship going on. And the kinds of taxation that we have don't help with that. So you know, we haven't yet got our skills right, but it is something that government should

(36:07):
be doing and also obliging larger companies to undertake.
In schools, have it threaded right through the curriculum, and indeed further and higher education. And then when you get into your career and the workplace itself: bite-sized badges, boot camps, all of those means where you can,

(36:28):
at speed, in an agile manner, get these skills, get these competencies while you're in role. So you have speed, you have pace, you have agility as you're moving through your career. It's doable. But ultimately, as with so much, it comes down to leadership, and that leadership should come very much from the top.

(36:49):
When you say come from the top, are you referring to the companies or to the government?
It should come right from the Prime Minister and then every level down, and then in companies, in all businesses, in all organizations, there's a leadership role for everybody to take at their level of the organization, to lead on this.
So from your perspective, and I don't want to put words in

(37:11):
your mouth, this seems a major, fundamental societal mandate, if we put it that way.
Last year, two billion of us went to the polls in democracies around the world. And yet, where was any of this
talk in those manifestos, in those party election broadcasts,

(37:36):
in that election literature? Where was anything talking about what AI, what blockchain, could mean for you, your families, your businesses, your communities, your cities, your country? And yet if we look at the major
challenges of our time, climate crisis, energy emergency,

(37:56):
migration, cost of living, we're all unbelievably connected and yet there's a loneliness epidemic. Well, technologies have a lot to say to all of those issues if they are human-led technologies in our human hands.
What phenomenal manifestos parties could have written

(38:18):
talking to all of those issues of our time.
So many are global in nature, so many existential in nature, and yet a human-led narrative, human-led technologies enabling that narrative, could have been so positive to draw people into the public discussion, the public discourse, around how we go about these new technologies. And yet, frankly, to find much mention in any of those, you'd have needed my great-
(38:43):
mention in any of those, you'd have needed to find my great,
great-great-grandfather Sherlock Holmes to be deployed to try and find much mention of this.
I mean, people would think of the first duty of governments as being security, if you like, but livelihoods are part of that security in my view. And you know, pathways to jobs,

(39:06):
education and so on must be a vital role for government.
I don't think that governments can just simply delegate that to
business.
This is a question from Brian Mosley. And he says: how do we protect ourselves from the impending weapons of mass persuasion?

(39:27):
And he says by Elon Musk and Mark Zuckerberg. But let's just make this weapons of mass persuasion generally, meaning AI-generated misinformation and disinformation.
Back to critical thinking, and also I think how data is
directed to you in a targeted kind of a way.
We all have to be increasingly careful about the use of our

(39:50):
data. The kind of surveillance capitalism that Shoshana Zuboff talked about, where, you know,
basically you're giving your data to social media and then
social media is targeting you with advertising and so on.
Well, increasingly that's going to happen in terms of misinformation and disinformation. And it already is, quite frankly:

(40:12):
for instance, messages during election campaigns and so on, and how much regulation is there of those kinds of messages in most of our democracies? So we're already in a real problem. And again, we can't just rely on regulation.

(40:32):
We have to rely on our own critical faculties.
Critical thinking, critical senses to the fore. Test, test, verify. Take that second, that third, that fourth click, and always be so present and so alert. Because while the government have a key role to play, we have to be absolutely tooled up ourselves as citizens, our

(40:56):
swords and our shields to the fore.
Here's a question from Doctor Carolina Sanchez Hernandez, who
says, what do we do when no one is able to explain the outputs
of AI, when a person is unable to get a loan or insurance?

(41:17):
Because these are black box systems, very often, the algorithms. Tim, you wrote a whole book about this. The big challenge is oversight going forward, and also oversight of the entire AI value chain. So what do we do about this black box element that's making key decisions over our lives?
Chris and I are passionate about regulation that makes sure

(41:42):
that you do get that element of transparency and you do get redress if decisions are made by this black box and you're not informed about it, or it's biased, or there isn't a human in the loop, and so on.
And, you know, there's no escaping it if we're not going to be completely subject to AI decisions, whether by government

(42:07):
or private businesses like insurance or bankers or whatever
it may be. You know, we need to have that
level of regulation. You know, you can't just rely on
voluntary agreements. And, you know, we talked about
being tech optimists or tech realists or whatever.
Well, you know, I'm an optimist, but it's conditional

(42:30):
on making sure that we have this kind of regulation in place. Because otherwise all we're doing is making sure that people are living in a world which is completely opaque, where decisions are made about them that they've got no control over and no opportunity of redress. So that's not a world I want to see. It's where literally, you know,

(42:51):
the machine, AI, is our master, not our servant, to quote the title of my book.
It was a great advert there for Tim's book and I recommend it to
everybody. And he hasn't even offered me a
percentage for that endorsement of it, but it is a good read.
So if you haven't, do get your hands on it.
And similarly, I wrote a report on this subject earlier this

(43:15):
year and I entitled it 8 Realities, 8 Billion Reasons to
Regulate. And I set out exactly what this question gets at: the person who doesn't get a loan and doesn't even know that AI was in the mix, never mind that AI was
the rejector of their loan application.

(43:36):
I looked at the scammed, I looked at the teacher, I looked
at the transplant patients, people on the liver transplant
list in this country. AI is involved there.
Do they even know that's the case?
So first things first: know that AI is there. So labelling, transparency, because without those two things

(43:56):
how can you possibly have anything which looks like trust?
Secondly, do not for one minute accept this argument that it's a black box, it's just too complicated, we can't know what's going on there. Because when the organisations need to know, you'll find out that it is possible to know what's going on, and you can make those changes.

(44:17):
So the legislation I brought in, in draft, in 2023 went to many of these points, to look at the right regulatory framework for these technologies which we call AI. And I think, as a formulation, if we are principles-based,

(44:37):
outcomes-focused and inputs-understood, that really gives us the right framework to go about this. And with outcomes focused and inputs understood, you can see that by doing that you really squeeze that supposed black box in the middle.
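To make "know that AI is there, labelling, transparency, inputs understood" concrete in engineering terms, here is a minimal sketch (an illustration with hypothetical field names, not from the episode) of the audit record a lender might keep for every automated decision, so a rejected applicant can be told that AI was in the mix and what went into the call.

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    """One auditable row per automated decision (field names illustrative)."""

    model_id: str                  # which model and version made the call
    inputs: dict                   # the features the model actually saw
    output: str                    # the decision as communicated to the person
    ai_disclosed: bool             # was the applicant told AI was involved?
    human_reviewer: Optional[str]  # None means no human in the loop
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord) -> str:
    # Serialised for an append-only audit store: redress starts with being
    # able to reconstruct exactly what the system knew, decided, and said.
    return json.dumps(asdict(record), indent=2)


if __name__ == "__main__":
    print(log_decision(DecisionRecord(
        model_id="credit-scorer-v3",
        inputs={"income_band": "C", "postcode_risk": 0.42},
        output="loan_declined",
        ai_disclosed=True,
        human_reviewer=None,
    )))
```

Records like this squeeze the "black box" from both ends: the inputs are understood because they are logged, and the outcome can be explained and contested because it is tied to a specific model version.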

(45:00):
Let's jump to another question. This is from Huawei Wong on LinkedIn. She's a senior data scientist, so this is from that perspective. She asks: what prevents innovators from being reduced to data tenants only in Big

(45:20):
Tech's empire?
I'm passionate about the open
source movement myself. And I think a way of escaping from just being, if you like, the slaves of the big language models, in that sense, is if there are open source

(45:41):
models which you can piggyback on and create new forms of AI tools using those bigger models. I mean, you know, I'm not a huge fan of the Llama large language model from Meta, but it is open source and it is

(46:04):
available by and large for people to experiment with, build
their own models and so on. That is not true of Claude or
ChatGPT in quite the same sort of way.
So, you know, there are the possibilities of being able to
use the power of models that have been created and that will

(46:28):
then create a diversity of AI tools that would be available. And I think that's really what we look to in the UK. I don't think we're going to have the kind of compute capacity, the sheer sophistication of chips and so on, and indeed even the financial resources, to create our own large language models in quite

(46:51):
the same way. So we're going to want to piggyback on some of those, and I think that's the answer. The data sets that we'd be using would not be quite as comprehensive, scraped from every bit of the Internet in the way that they are for Claude and ChatGPT. They would be more proprietary. And, you know,

(47:11):
hopefully people will have paid for the rights to access the
data so that then there'd be benefit for those who created
those data sets. So again, I think that would be beneficial in many other ways as well, rather than the current situation where people like OpenAI and Anthropic pretty much are denying that they need to pay anybody for training their large

(47:34):
language models on what is often copyrighted.
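Tim's "piggybacking" point is how much of the open-weights ecosystem already works in practice. A minimal sketch, assuming the Hugging Face transformers library is installed; the model identifier is illustrative (gated families such as Meta's Llama require accepting a licence before download), and any open-weights chat model on the Hub could be swapped in.

```python
# Assumed environment: pip install transformers torch
from transformers import pipeline

# Illustrative identifier; any open-weights model on the Hugging Face Hub
# can be substituted. Gated models (e.g. Llama) need licence acceptance.
MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"

generator = pipeline("text-generation", model=MODEL_ID)

# Downstream builders layer their own prompts, data, and fine-tunes on top
# of published weights instead of renting access to a closed API.
result = generator(
    "List three risks of relying on a single cloud vendor.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```

Because the weights are local, a builder can fine-tune on proprietary data and serve the result themselves, which is exactly the escape from "data tenancy" the question asks about.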
I think open source has the potential. It's not inevitable, but open source has the potential to be the special sauce in this. And building on that, to understand that the approach and the models

(47:57):
that are built and deployed out of the Valley, that's a particular approach, but it's one approach.
There is a whole array of different ways of going about
this, a whole array of innovative ways of going about
this. If we consider where certain

(48:18):
data rests, if we consider some of the data sets that we have in
this country, if we really understand all of the issues
around privacy, if we understand all the issues around whose data
it is, who owns it, who has the rights to that data.
If we start to talk much more about the potential for the

(48:39):
National Data Library, the NDL, and what could come about, there are many, many reasons to be rationally positive about this.
But we will get this fundamentally wrong in the UK if
we believe that the US model is the only model or we seek to
just simply copy or ape that model, because we will just be

(49:04):
Coke Light if we do that.
And we've got our debate raging
in the UK, and I know that there are many cases in the US, Michael, where copyright owners are suing, you know, the big tech companies for infringement by training on their material. They're training their models on

(49:26):
copyright material. And all those cases are yet to
come through, you know, to see what the final decision is.
But certainly, our government has had a lot of criticism for
trying to give a free pass to big tech companies in their
training. And it's going to continue to be
extremely controversial even when we come to new legislation

(49:48):
next year, because we're due to have an AI act, an AI bill, come through next year.
And of course, speaking for everybody in the US, we here in the US do believe that our way, I'm not going to say it's the only way, but it's the right way for everybody else. OK, moving on.

(50:09):
Very. Yes, I mean, of course, the approach to data is very different in the States. And what we're thinking of more is how individuals can in a sense collect the data within communities, if you like, but for public benefit. But it does mean that they will have greater control over their own data, the data that they're creating in their daily lives.

(50:33):
So it is a different approach. AI infrastructure is becoming as
necessary as electricity. Given this, should data centres
be treated as regulated monopolies?
I think there's a question which is worth debating around a number of elements of these technologies as to what's the

(50:53):
role of the state, what's the role of the business?
A word Tim used a long while ago is absolutely right.
This sense of getting the right partnerships will get us to the
right place. I think they will be regarded as
a utility ultimately, like gas or electricity or whatever,
because they're such an important part of the
infrastructure, quite honestly. And also the cost of developing

(51:17):
these data centres is becoming so astronomically expensive.
Yes, the environmental cost included.
Yes. And there's also the impact on
electricity prices. I mean, here in the US, our
electricity prices have significantly increased over the
last year or two. You don't know anything compared

(51:39):
to the UK, Michael. Seriously.
And it's all jam tomorrow. We're told that small nuclear reactors are going to be the solution, but they won't come on stream for, you know, a decade or so. So you know, this is a bit pie in the sky at the moment.
Chris, I'll have to mention what
you said about what's going on in the UK to my wife, who

(52:01):
recently complained to me that our electricity bill had
doubled. Well honestly, you are.
You are still very much tucking into a nice pot of jam compared
to the UK situation, I promise you.
I think you better come on holiday here and just see see
what it costs in your holiday cottage, Michael.
How do we strike the right balance between regulation for

(52:22):
the protection of society versus protecting new technologies?
What we have to do is understand that this kind of mantra, that regulation is the enemy of innovation, is not the way forward. What we have to do is understand that the proper forms of regulation involve assessing
the risk, making sure that the standards you're imposing create

(52:46):
an interoperability with other countries. You know, we're not sitting there in isolation. So you're not creating a kind of oasis for your own developers who can't then get out and sell their wares and sell their products overseas. You've got to make sure that you're regulating in an

(53:07):
appropriate fashion. And I would say that you've got to have the risk assessment. If you've got high-risk AI being created, you've got to make sure that it has certain duties of transparency, accountability, liability and so on, and that the consumer, if it's consumer-facing, has the right of redress. If it's business, you know, they

(53:31):
have other rights relative to that.
This is not some terrible idea. We regulate so many areas. Aviation; the motor car wouldn't have taken off unless we'd had safety regulation for the car; consumer products, most of them have

(53:53):
safety regulation. And that's what we need for AI.
OpenAI has lost billions. It survives only because of
investment from companies like Microsoft.
Anthropic survives because of investment from Amazon.
Can we make the argument that market concentration is a great

(54:14):
thing because it's enabling AI?
You can make the argument that it's a great thing until it isn't. It's something that a lot of us should be at least somewhat concerned about, that economic concentration and the circularity of funding, applying all the principles that

(54:34):
we know. So looking at what you're investing in, what the return on investment is: applying all of those good economic and business principles that are well made will give us the right answers and the right analysis.
And of course, if this is going to be one great bubble at the end of the day, I think we're all a little bit apprehensive about

(54:54):
this.
Chris brought up this issue which we have not touched on, which is the circularity of funding. Just your views on that; it's such an important issue, but we're almost out of time.
Yes, absolutely. And this is exactly why people are worried about this bubble: because, you know, Microsoft is investing in OpenAI.

(55:16):
OpenAI is buying credits from Microsoft. And, you know, NVIDIA is doing pretty much the same. So they're feeding each other economically. And if confidence goes, and OpenAI doesn't make any money, and it isn't making a great deal of money at the moment, then what happens? You know, everything goes

(55:37):
bust. And it all depends on Microsoft continuing with its mainstream business rather than with the large language model, ChatGPT.
What should business leaders and boards of directors do in this
complex environment with shifting regulations, changing
technologies? Your advice, please, for business leaders and boards.
Get involved, get upskilled,

(56:01):
get knowledgeable, get involved in the debate and discussion,
understand what you're trying to achieve as a business, understand how potentially these technologies could help, and partner up and really connect and make it a team effort. Know what your values are in the middle of all of this, because
there's nothing like AI for making you question your values

(56:23):
in all of this. Know what your values are.
That's an abstract concept, potentially.
It is potentially abstract, but frankly, what is the purpose of
your business? You know, well, how do you treat
your employees? You know, what kind of trust do your consumers, your employees and your other

(56:45):
stakeholders have in you? Well, often that's all about the values that you as a business have. And I think we forget about them too often.
Chris, what advice do you have for regulators and policy makers
when it comes to the set of AI issues?
Understand what your role is, understand how these

(57:07):
technologies are developing and how they're already being deployed, and look at an approach where you do right-size regulation and you understand the complexity of the needs of
the consumer, the creative, the citizen, the investor, the
innovator. It's not straightforward, but
it's completely doable. And we've seen examples in other

(57:28):
areas in the UK where we've done it. So we know how to do this; we need to do it. And in the words of a great American company, Nike, and their best ad campaign ever: ultimately, having done all that, just do it.
Tim, you're going to get the last word: advice for regulators and policymakers.
Resist the blandishments of Big Tech, who will tell you that,

(57:51):
you know, all regulation is terrible, they won't be able to
innovate and do business, but you have to resist that.
We know the resources that are going into this from Big Tech,
both in Brussels, London, Washington and so on, and we
need to be resistant to that. How do you resist?
They have so much money and resources to throw around and

(58:15):
share. How do you resist?
You resist by coming on CXOTalk and influencing the influencers, Michael.
That works for me. And with that, a huge thank you to Lord Tim Clement-Jones and to
Lord Chris Holmes. I am so grateful to you both.
Thank you for taking your time and sharing your expertise with

(58:38):
us once again. Thank you very much.
Pleasure as always. And I hope you'll both come
back. Everybody, thank you for
watching. Now, before you go, subscribe to the CXOTalk newsletter: go to cxotalk.com. We have amazing shows coming up. Check it out.

(59:00):
We'll see you again next time, everybody. Have a great day, and have fun with AI.