Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Tim, let's just review that a little bit more about the book. So it's called Living with the Algorithm: Servant or Master?, which is a very leading statement to get us into the topic today. And you talk about AI governance and policy for the future. So the book navigates the delicate balance between harnessing AI's transformative potential and ensuring robust ethical governance, which you fully believe
(00:21):
is possible and doable, and we have the tools and the will now. It provides insights for understanding the global landscape of AI policy. Themes include ethics and regulation, a global perspective on AI governance, impact on democracy, public sector implications, IP (one of my favorite topics), legal practice and AI, convergence of standards, and future directions in AI safety and regulation. So Tim, just to
(00:46):
rest my voice for a second, a huge welcome to you. Thank you very much, Richard, and it's great to be here. And yes, I do cover quite a bit of the regulatory waterfront, and we haven't even started talking about autonomous weapons systems or live facial recognition. AI is increasingly pervasive in our lives, and the use cases are so varied that a lot of people have said
(01:07):
to themselves, we can't regulate this, it's far too diverse. But I do believe, and I think if you start at first principles with the ethics involved, and that's very much your approach, I think if you start at first principles, then you can develop both regulation and standards, and you can
(01:29):
do it not just on a national basis, you can do it on an international basis. And I think one of the exciting things about our conversation today is you've got quite an international group of people on this call. And what I wanted to do, I didn't want to be overly UK-minded. But obviously a lot of the examples inevitably come from the UK, in
(01:53):
terms of things like digital exclusion and the kinds of regulation that might be appropriate, or from Europe. But I do think there are certain common factors which can be drawn from this, whatever continent of the world you are in, where I think legislators and governments and so on need to be much more aware
(02:15):
and much more active than they already are. Partly, one of my main motives for writing the book was as a reaction against very slow government action here in the UK, and then I really got the bit between my teeth when I looked at the Bletchley Summit and thought, what's the outcome of this summit that's
(02:37):
going to prove useful for the ordinary person in terms of protection against some of the risks of AI? Because I'm a tech enthusiast, but I think unless you get public trust and public protection, then we're not going to be able to use the technology to its best and fullest extent. So we have to ask ourselves the question, and you'll see this running through the book, is not
(03:00):
whether we can, but, not just what we do in terms of innovating, should we be using AI in certain circumstances? So there are moral issues involved here as well. This isn't just a given that this is a technology we apply right across the board without any thought. Yeah, absolutely, there was
(03:21):
a lot in there, Tim. And I think when we look at these risks, we know right now we're dealing with biases and misinformation and harms. As we look a little bit across the horizon, we can see the risks of job displacement and erosion of the middle class that would of course come with that, and we think about some of the older industries like mining, where a lot of jobs were lost in places in the UK and the US and others
(03:42):
never recovered. And then even longer out from that, we think about alignment, we think about control. But we've got very much issues today, the tenth of May, and we've got issues that are maybe ten years away, and we need to align our focus appropriately. Just thinking about the national and international question for a bit: if we took climate change as an analogy for this, you could look at the UK and say it's not really going to make a huge amount of difference what we do in the UK compared to China or other places.
(04:05):
But that's not entirely true, because what we can do is we can show leadership. We can inspire others to show what's possible. Do you feel that we're doing any of that in AI regulation and safety? I think we've got some of the institutions that are leaders in this field. I do think people like the Turing are a great powerhouse for, if you like, tech analysis and understanding,
(04:30):
and some of the work they've done has been groundbreaking, especially with what was formerly the Centre for Data Ethics and Innovation, in terms of their trust barometer and things like that. And also we've got some institutions like the Ada Lovelace Institute, which is, if you like, a subset of the Nuffield Foundation. But they've done some groundbreaking work on the kinds of regulation that we should
(04:53):
be seeing here and now in terms of mitigating the risks of AI of the kind that you're talking about. I'm a believer that regulation is perfectly practical, and if I didn't believe that, I'd be one of the doomsters, because there's a bad narrative. If we're not careful, the narrative runs away with you with AI: you get the Stephen Hawking narrative, the best or worst thing for humanity,
(05:17):
or you get the Elon Musk narrative saying it's more dangerous than nuclear weapons. If we're not careful, that's the narrative that sticks. And that's why it's up to legislators, regulators, people like me, academics, opinion formers like you, I think, to make sure that we have a culture and a
(05:38):
climate where there is pressure to have regulation. Now, how you do that internationally is really important, because at the moment we've got a little bit of competition in this area. The EU, having in a sense established the GDPR as the data protection gold standard, was pretty determined to make the AI Act the AI
(06:00):
gold standard, and probably because we've been rather slow off the mark, that is probably going to happen, basically. But the saving grace is that underneath that, whatever the level of regulation, and we're all going to vary in our cultures. Our government has explicitly said that they're not regulating for the moment. They believe we need to evaluate risk more, and they're more into existential risk and all
(06:26):
this kind of stuff, wait till artificial general intelligence comes down the track. I don't believe that, but in the meantime, I think it's possible, and it is happening, to agree on international standards, with the National Institute of Standards and Technology in the States, CENELEC in Europe, the British Standards Institution, the IEEE, and some private sector organizations, so they are coming together. We're not
(06:51):
quite there yet in terms of identifying exactly, you know, how many key standards we need and what the essential ingredients are, but certainly in terms of a risk assessment framework, that is beginning to happen. The OECD has done some really valuable work on that, and I'm slightly torn between saying it doesn't
(07:12):
really matter what the UK does, because quite frankly, we're going to have to sign up to the AI Act if we're going to have access to the market. A developer, for instance, is going to want access to a market of four hundred and fifty million people. And by and large, if you're a public procurer, you're going to find that the software that you buy will probably conform to those regulations. But you can certainly, I think,
(07:38):
bank on the fact that in a few years' time we're going to have a set of common standards between ourselves, the States, and Europe. The bigger question is how much broader than that we can go. Can we include India? And I know we've got a member from India on the call today. Pakistan? It would be fantastic if we could get a sort of broad sign-up.
(07:59):
And the kinds of standards I'm talking about are, yes, a risk assessment framework, audit, training, testing, monitoring, continuous monitoring: a series, a suite if you like, of standards which we could expect both predictive AI and generative AI to adhere to. And I don't believe that's too pie in the sky, frankly.
(08:22):
I just want to pick up on some of the ways that you described the narrative around AI. So is it about what gets regulated then, and do you think that there's an intent here from certain parties that they're just trying to move the goalposts so that some things can get regulated and leave a bit of a free-for-all in other areas? Hmm. That's a really tricky
(08:43):
one, because in a sense you don't quite know whether it's all about double bluff. We've had people like Sam Altman of OpenAI saying almost "regulate me", but on the other hand they've been quite keen on signing up to things like the Bletchley Declaration and the White House statement of guidelines. So it's not
(09:03):
entirely clear. I think it all depends what the threat of regulation is at the end of the day, and certainly that's true in the competition field. Let's not forget, and you picked up on the IP point as well, there are three, or maybe four, really crucial areas. First of all, the whole area of regulation of broader AI use, predictive AI, generative AI,
(09:30):
if you like the civilian use. Then you've got lethal autonomous weapons, the use of AI in those. Then you've got the intellectual property aspects, which are really important, and you've got to make sure that you're getting all that right as well. And then of course the public sector is the fourth area,
(09:50):
things like live facial recognition, surveillance by the police, and so on and so forth. So you've got to segment this a little bit in terms of thinking about the different ways and different contexts in which you probably need to regulate AI as well and have certain standards. Which of those two narratives that you mentioned
(10:13):
that are coming out of certain parties bothers you more? Is it the narrative that it's moving too fast so you can't regulate it, or is it that we should be focusing on the existential risk? Both of them seem like red herrings, but they're both quite common, aren't they? Yes, and sometimes exactly the same people put forward both arguments. So I say a curse on
(10:35):
both your arguments, basically, if that isn't a sort of perversion of a quote from the Bible or whatever. But I mean, I do believe that you've got to knock down these skittles. You've got to knock down the arguments. That's what, you know, debate is all about. That's what Parliament's all about. It's what democracy is about: we debate amongst ourselves and
(10:56):
we try and get the arguments right. I mean, our government is very keen on the argument that by and large regulation is the enemy of innovation. I don't believe that. I think regulation, certainly in terms of business certainty, can underpin innovation, because you have the confidence that you have a sandbox and all that
(11:16):
kind of thing. But at the end of the day, you pretty much know what you're going to have to conform to in broad terms if there are standards out there, if there's regulation out there, whereas if it's just a free-for-all, it's everybody for themselves in a sense, and you don't know what the future holds, really. So, although I'm a great believer in competition,
(11:39):
I think unbridled competition, which has no regard to, if you like, the purpose, the beneficial purpose of what you're trying to do, that's not the direction that I would want to go in. And of course, I mentioned competition a few moments ago. We don't know how many big players we're going to have out there. At the moment, what have we got, five big players in AI
(12:03):
systems? Give or take. Microsoft has got its own AI operation now, but they own a very large chunk of OpenAI, and Anthropic, I think, has got relationships with Amazon. So all of these major AI platforms have relationships with the existing big tech
(12:24):
companies. To the title of your book, who controls the algorithm? And I agree with you. In fact, we don't even need to say it's our viewpoint. What you said a moment ago comes from Joe Biden. He said in the US a couple of years ago, we do not have capitalism, we have extortion, because the monopolies that are in place don't allow real innovation to take place. And I don't
(12:46):
disagree. So who does control the algorithm then? That's a very good question. I don't think we have enough visibility. And I know one of the key takeaways from your conference back last year was transparency and accountability, and I do think those two things are very much missing. You could argue, for instance,
(13:07):
you know, that data protection laws have some ability to control the data inputs into AI, and sometimes the data outputs, and our equalities legislation as well in terms of bias and so on. But it's the transparency which I don't think is really there. When you look at the gap analysis of what is there in
(13:31):
terms of existing legislation versus what you need to have, a lot of it's really about explainability, transparency, and accountability, at the end of the day, that we need to inject into the regulatory system. Just on the data side and the IP side a little bit: have we missed the boat, because these platforms have already harvested and hoovered up huge amounts of copyrighted material to build these
(13:56):
platforms? They're now valued as multi-billion-dollar companies, they've got millions of customers, and they've probably done enough to be immovable now as entities. I think the tallest horses have probably jumped out of the stables, but there are still quite a few horses in there, basically, if you're using the analogy of shutting the stable door after the horses have left. I think we still have a
(14:18):
chance there. If you noticed, the other day the Financial Times came to an arrangement with OpenAI in terms of OpenAI's licensing of material from the FT as part of the training material for its ChatGPT family, so to speak. And I think that's a very good development. Of course, we're
(14:39):
quite fortunate in the UK in having quite robust copyright law and quite robust courts that enforce that. Now, in the States there's the fair dealing exception, and it's more difficult in terms of interpretation, and that's why there's a heck of a lot of litigation going on there. And even in Europe now, with the AI Act,
(15:00):
in a way they've given something away, and the creative industries are not particularly happy. There's a new text and data mining exemption which you can opt out of, but it's not that easy, frankly. But if you don't opt out, then it is a text and data mining exemption that can be used by AI developers to train their algorithms, and you're in trouble there. Whereas we don't have
(15:24):
that. We resisted. Luckily we got there before the Minister signed off on it, and that was only a year ago. We resisted having a text and data mining exemption, through a campaign by the creative industries. Now I suspect many other jurisdictions are going to have a view about that. Look at the Indian creative industries, amazing in terms of novelists, in terms of films and music and
(15:50):
so on. And I hope, for the sake of the creative artists in that jurisdiction, for instance, that they're looking at their IP protection. As a small anecdote, I was using an app called Lensa, where you upload twenty photos of your face and then it starts pulling you into all sorts of fun AI poses and outfits, and one of them you can select is luxury clothes.
(16:14):
And when you get these photos back, you can still see the logos. So you look down and you see the Gucci logo in the corner of the photo. They haven't even managed to get rid of it. Wow, that's quite effective watermarking, isn't it, which we keep being told is not really possible. But there we go. Because of the whole deepfakes argument at the moment, in a year when something like two
(16:37):
billion people are going to the polls, and of course India is going through seven stages, I think, of its polls, whether or not there's control over deepfakery is of crucial importance. Yes, massive indeed. Just to give ourselves a moment to reflect, Jasel, could you maybe just give us a few comments from the chat, things that are coming through, so we can
(16:57):
pick them up a little bit more? Absolutely. I think the chat is really busy, and it is interesting to see so many perspectives and questions from people. For the moment, we will look at the comments which are coming in. I'll read some of them, not all of them, because we have a time constraint here. So one of the comments that came from Alex is:
(17:19):
my concern is how we keep ethics front and center in regulation of AI. There are a few examples right now where ethics have been pushed into the background: weapons to Israel and other places, IP on social media platforms including YouTube and Spotify, and personal data ownership. There's another comment which was quite interesting, from Jazz: there is currently a significant lack of congruence and coherence within
(17:45):
AI ethics, DEIB and ESG policy implementations, and also mutually between them. When these clash, we get the Google "ethnic Nazi" debacle. Organizations don't want to risk that, and so besides regulation, businesses are terrified of such PR disasters. One came from Bugdanti: regulations should address security standards for AI systems in national security
(18:11):
policies. Existing security frameworks are not adequate for AI deployed at scale. Similarly, Nish mentioned product safety and liability as another area of tangible risks in the AI journey. So we are seeing a mix of comments on what people feel about regulations, what are the risk areas that they are coming across or
(18:33):
they feel need most attention. And interestingly, from Paul: regulation is often literally linked to rules. That's a pity, because regulation also means processes which create efficient and effective flow. And Patricia has mentioned: the issue is also that those sitting on the boards of AI companies are the same. We have the same
(18:56):
people governing all AI. I think this was really interesting, and to end with: if you work like a robot, you will be replaced by one. Some of the algorithmic space must have value contributed to it by human consciousness. The way we maintain that is as human beings. I think these
(19:18):
are quite a bit of the comments that came from the chat. I can add more and more, but I think that would take up all the time. I'll try to make just a quick comment about three particular areas. First, what we're not trying to do is centralize control over regulation. This is not government controlling what AI is or is not
(19:41):
developed. It's basically trying to mitigate the risks that arise from different forms of AI, which means that certain forms of AI are not going to be enabled. And the EU has a very explicit risk-based form of regulation. We haven't yet adopted anything along those lines, nor have the Americans.
(20:03):
So there's going to be a debate about how far you go in terms of cautiousness, if you like, about the precautionary principle, really, as to whether or not you're going overboard by being too cautious in regulating. But that's a debate to be had. It is not about saying to Anthropic, you will only do this form of AI, because actually that's not how it
(20:25):
works. But, for instance, it is about saying, look, if you are going to produce deepfake pornography, you're going to get fined, whether or not you share that on social media or you are facilitating it with your software. The second area is that I am deeply worried on the weaponry front, and I entirely agree. We're seeing AI systems now being deployed, and the latest is Gaza.
(20:48):
But it's also been true in Ukraine and in Libya: the growing use of autonomous drones and targeting systems, AI targeting systems, is now becoming commonplace. And the whole idea that you should have meaningful human involvement in these kinds of weapons is
(21:10):
crucial, but it is being eroded. And Professor Stuart Russell, who I've got a great deal of admiration for, has been campaigning long and hard to get international agreement on this kind of thing. Now you may say, look, that's a bit of a pie-in-the-sky thing, but we did it with nuclear weapons. This is more difficult, because in a sense these drones and the AI systems are all retail, if you like; they can get into
(21:33):
the hands of bad actors much more easily. But nevertheless, we should give it a try, and there are initiatives at the UN in the Convention on Certain Conventional Weapons, and we should give that the best possible shot. The third thing I would say is that, yes, I agree that the implementation of regulation based on principles internationally is very difficult, but the OECD,
(21:59):
let's not forget, got the G20 and its own members in the OECD to sign up to a pretty decent set of principles back in twenty nineteen, and in my view it is possible, in a sense, to circumnavigate the regulation by saying, why don't we make sure that we have sets of standards which
(22:19):
incorporate those principles, and those standards are highly practical, and they're the sort of things I've talked about earlier. Then you can leave it up to each individual country, in a sense, as to how tough to be with their regulation. At the end of the day, it will probably come together, but just take longer, because politicians don't really understand a lot of this stuff, and
(22:40):
they think that if you regulate in a different way, that may give you a national competitive advantage of some sort, not really realizing that nearly everybody in this field operates internationally in one form or another, whether it's academics with spin-outs, or developers wanting to exploit their systems abroad, or multinational companies having
(23:03):
to conform to ethics and regulation. This idea that we can be exceptional by having lighter regulation than anybody else is for the birds, frankly. And for me, the proof of that is the fact that the people who have the heaviest regulation on large language models, on generative AI, are
(23:25):
now the Chinese. In the private sector, large language models developed by people like Tencent and Alibaba are subject to stringent regulation, and that's more than any other place in the world currently. Thank you, Jasel. I just want to jump on that last point. Are they looking to regulate for different reasons than we are in China? Are they protecting different people in different
(23:52):
ways, when you talk about that? Yes, absolutely. But they signed up to the Beijing Principles, which are exactly the same as the OECD principles, back in twenty nineteen. You might have your suspicions that this is for the benefit of the government, that they don't want to have competition from the private sector,
(24:15):
or the private sector to use powerful AI, which might impact on the Chinese constitution, if you like, putting it in that neutral way. But they would argue that it's in line with the Beijing Principles, which are conventional ethical principles. It's difficult to say, but it is a fact that they have got things
(24:37):
like deepfakes pretty heavily under control, as we don't. We definitely don't. And if you don't even know what a deepfake is, it's hard to know if you're looking at one. It's all right to dress up in Gucci, but going beyond that, Richard, that's the problem. They weren't actually photos of me, for the record. Oh no, oh no, I was doing it for a friend. But anyway, so we don't have
(25:00):
the climate change first-mover problem then, the way you're describing it, the one where no country wants to put in the legislation around emissions, because if they do it and no one else does, then their GDP will suffer. You're not describing that, are you? No, and it's
very interesting the way you put it, because in a way the EU has
(25:21):
had the first mover problem and willhave the first mover problem. But they've
I think been incredibly agile. Theymanaged to alter the ingredients of the AI.
Actually take account of generat of AIbetween chat GPT coming out in whenever
it was twenty twenty two, andit's pretty powerful form. I think it
(25:42):
was three point five or four atthat time. But anyway, it became
obvious that the AI Act was notgoing to be fit for purpose unless it
separately dealt with generatly of AI,frontier AI and so on, and they
incorporated that and then signed off thisApril on the Act. Now, I
think that's in a sense, it'spropelled the EU to be much more fleet
(26:06):
of foot in the way that it produces regulation. Their first-mover advantage may be sustained, actually. What is that advantage? And have they not lost out on innovation and investment because of this? No, I don't think so. I think it's a bit like when everybody sucked their teeth when the GDPR came out. But what happened? Microsoft said, no, this
(26:29):
is the gold standard, we've got to adhere to this. When you've got four hundred and fifty million consumers, this is quite tricky now. And it's not very different, actually, because even in the data world, okay, one of the EU's official languages is English, but so much of the English-language content is created outside the EU and in
(26:49):
terms of data and so on. But nevertheless, the GDPR has really stuck. And I suspect the AI Act will too. Don't forget, there's also the process called reflection, where you put an act into being and it won't come in until twenty twenty-six. You then have a three-year period and then you reflect on what's worked and what hasn't. And it's quite a good
(27:14):
process. And I suspect that there will be changes that need to be made, and actually, probably at the moment there are changes that are going to be proposed to the EU GDPR as well. I think it's a pretty amazing thing that you can legislate for twenty-seven member states and still be as agile as they are, quite frankly. And they did, of course, they made
(27:36):
Apple change their phone chargers. So you're absolutely right about the power. Oh yeah, absolutely. It doesn't stop Apple and Google trying to sue the hell out of the competition authorities, but at least you know where some of the power lies. Anyway. Yeah, and personally identifiable data in the models is probably one of the next big ones to get right. But I take your points about charting a path, showing what's possible, and
(28:00):
showing leadership, and I agree with you. I just want to come to a couple of things that were raised before the session, because we've talked a little bit about other countries, and one of the concerns I think that comes out across the world is that these big AI models, such as ChatGPT, are trained on Western data. Have you heard those sorts of issues being raised, and how have you
spoken about them before? Yes, I think it's a real issue.
(28:22):
If you're not careful, it's another form of imperialism, really, colonialism, and you have to be extremely aware of all that. But for my money, that's a government issue in those jurisdictions, and it's something that big tech needs to look after. I had
(28:44):
a joint committee which looked at our Online Safety Bill, which was all about social media and so on. And the parallel is that we discovered that Meta have a much, much lower level of moderation of messages on their platforms outside the main English-language areas. For instance, Arabic Facebook has a much lower level
(29:08):
of moderation and, if you like, control over what's happening between the users, and that is a really bad thing, because it means that what you're getting is a much less safe service on those social media platforms. Now, if the same is going to be true about AI, which it probably will be,
(29:29):
it could well be a one-size-fits-all, move-fast-and-break-things approach. Then I think that is something that governments have to deal with as well, because you're stacking inherent bias in there, because it's Western, it's purely Western data. What do you think it will take to get some global agreements? And I emphasize the word "some" there, and I
(29:52):
sense that you don't think that the AI safety summits and declarations are the vehicle that will necessarily take us there. I don't mean to put words in your mouth. No, I don't think the safety summit is designed to do that, basically. If only they had then got busy talking about international standards at the end of the Bletchley Declaration, I would have actually bought into
(30:14):
some of that. And we've had some initiatives: the French and Canadian initiative, the Global Partnership on AI, for instance, I thought might develop into a sort of agreement on standards, or at least an agreement to agree on standards. But the best forum at the moment, and
(30:36):
we've seen them do some really good work on things like sovereign wealth funds and digital taxation, is the OECD, and that's why I'm quite an enthusiast, actually, for the Global Parliamentary Group on AI at the OECD, which Anthony Gooch, who is no longer with them, set up. He was their head honcho on communications and so on, and
(30:59):
he could see that the OECD could play a really useful role, because it's a very broad church. They've now published really good stuff on classification, which people use, and they're publishing more and more on standards, starting with a risk assessment framework and the commonalities between various risk frameworks, such as those in
(31:22):
the States and so on. So I think, if I was going to put my money on it... And in the education field, UNESCO has got quite a purchase, as far as I'm concerned. It's got a brand that people trust, and therefore if they come up with standards and so on for the use of AI in education, then I think that's going to be pretty important if you're
(31:45):
talking international. Obviously, at national level there are quite a few players, and I mentioned the Ada Lovelace earlier. Sir Anthony Seldon in the education field is doing a lot. There's a lot happening in the health field, but of course health is one of those areas where you already have quite heavy ethical screening of research
(32:06):
projects and R&D-type activity, in a way that means you don't necessarily need a hugely different set of ethics from the existing research ethics in health. Okay. And what would you see as the role for the UN more generally? Oh, I think the UN's main role is really going to be in
(32:27):
the weaponry area. I'm not a great believer in this Atomic Energy Authority stuff, the equivalent of that as an international body that oversees the creation of generative AI and frontier AI; I just can't envisage it. We're not talking about world government here, for heaven's sake. We're talking about interoperability of standards.
(32:51):
But when it comes to arms, that is a different issue. You are talking about control, and you are talking about serious existential threat if your warfare is basically conducted by machines, because of the collateral damage: a machine is not going to be able to assess the proportionality in the way that the current military
(33:14):
is meant to, and I say meant, and I emphasize that. I've been to Northwood, I've been to Strategic Command. I've seen what the military do when they go through the steps before they sign off on a particular target: they have to assess very carefully what the impact will be, what the
(33:36):
collateral damage is, whether that is proportional to the aim, to the war aims of the particular campaign, and so on. And you do that on the battlefield, and you have a chain of command, and it goes all the way through the chain of command; it goes right up the chain if the collateral damage is going to be disproportionate, and you don't take that decision.
(33:57):
Now, you can't do that with AI. It's much more difficult, because if your involvement is only token, you've got some targeting software that says: these are the bad guys that we've identified, and this is where they are at the moment. In Gaza, you've got three different AI systems which are used first to find the place, then to identify whether the people are
(34:23):
in the place. It's quite complex, but at each stage I'm not convinced that there's enough meaningful human involvement. There's a lot to do on that, for sure, and it's an issue that gets closer and closer to us, that we need to sort out. Yes. I'm going to hand over to Jasil
(34:43):
just for a second to make sure that we've covered some of the questions that were asked in advance, and also see what else has been arising
on the chat. But maybe on a slightly more positive note, Tim: I know how AI has influenced my life and made more time available for my health and wellness, and it's been wonderful for that, especially for an application that costs just a few pounds a month. But when we think
(35:04):
about the broader world and artificial intelligence, we think about people living in countries like Bangladesh and others similar to that. How is AI potentially going to improve their quality of life, do you think, over the next few years? I think you have to, first of all, start by asking whether people are going to be digitally excluded, whether they will have access to the internet, to Wi-Fi, to devices that can be used as platforms for AI
(35:30):
basically. So I think you've got to start at the ground level. After that, I think, you know, it could be fantastic, because it means that people could work from home to a greater degree using AI tools. You've got apps now for almost everything in the workplace, and it's very interesting, isn't it, how quickly Microsoft, for instance, has introduced Copilot into
(35:52):
its suite of products. It will mean, for instance, calculation of what the climate is doing in terms of what crops you grow. And even at a small-scale level, you're going to find retail-type things that AI can deliver for an ordinary SME, whether they're an agriculturalist or whether they're
(36:15):
a shopkeeper. These things are going to be more and more available. But it does need to come with a consciousness that you must get rid of digital exclusion and also build much greater digital literacy, so people shouldn't start running before they can walk in that respect. Over-excitement is fine in
(36:37):
the short term, but I think what we have to do is make sure that people can take the benefit. Absolutely, and I think that's the key, isn't it, overarchingly: how do we get as many ships as possible to rise with the tide that is AI, and not just a few oil tankers.
And I had a teacher at one of the sessions we were doing, and
(36:58):
they were pushing back on AI and were really quite anti using it, and so on. And when we broaden out the conversation, we have to acknowledge that in certain areas of the world, like the UK, we are living in a form of utopia. Now, we might all have mental health issues but, apart from that, it's utopia. But the problem with that is, if it's propped up by other people living in a dystopia, that means that our goods
(37:21):
and services and so on are made in ways that make other people's lives less enjoyable, to the extent of trying to make ours more enjoyable. And I think ultimately, for me, that's the greatest opportunity in AI: that we don't have to reduce living standards in the thirty-odd countries in the world that have raised theirs up significantly, but we stop relying on other people to fill those gaps in ways that are harming their lives and their environments and their prospects. Yes,
(37:45):
I think that's a very fair statement. If you can mitigate the risks broadly, it doesn't matter what the jurisdiction is; it can be used as a levelling-up tool across the board. I'm a great believer in AI being used to deliver the UN Sustainable Development Goals, because I do believe that, particularly in agriculture and in the treatment of disease and so on,
(38:10):
the analytical ability of AI, the ability to rattle through so many different possibilities and identify the ones that are really going to make the difference, or the crops that are going to be appropriate for particular terrain, or identify where the real patterns of poverty are. AI is a great pattern-seeker.
(38:32):
Yeah, I think it could be game-changing, but we've got to understand how this is going to be delivered. Yes. Yeah, there's a lot to do on that, for sure. And I'll just come to Jasil now, in thirty seconds. But your agricultural one, I think, is a great example. And it doesn't matter how brilliant the indigenous knowledge we have around the world is: what do they know about farming in the face of climate
(38:53):
change? Nothing. None of us do. And that's where we must absolutely have access to the data and the recommendations that you're talking about, because you just can't know that stuff, no matter how close you are to the land and your crops. Jasil, you'd better catch us up: have we got a couple of questions that we can come to from the chat? Sue has asked: I'd love to know if Tim has any insights into what
(39:16):
Labour would do on AI regulation were they to form the next UK government.
Well, yeah, this is a question that I'm often asked, and I do know there's a great deal of curiosity on the Labour front bench. Peter Kyle in particular, who is a decent shadow Secretary of State; he's visited
(39:37):
the States to talk about AI, and I know that they do plan to take this in hand. The only problem is we don't quite know anything in any detail. And the big problem from my point of view is that, of course, manifestos are written quite a bit before election time. They're normally pretty sketchy. They normally deal very much only with the absolutely key things:
(40:01):
health, housing, education, cost of living, that kind of thing. They don't deal with this kind of area to any great degree. It might, obviously, be different if, for instance, we were talking about social media. But AI is much broader than social media, and people don't get up in the morning and say, what the
(40:24):
hell is this government doing about AI? It's a relatively small community of people who are concerning themselves with this. That doesn't diminish its importance, in my view, but it isn't retail politics, basically, and it won't influence the outcome of an election. So I think we just have to hope. And of course,
(40:44):
part of my job, as another opposition party, is to try and put pressure on by raising these issues, because politics is a competitive sport, basically, to some degree, and nobody likes to be left behind. Obviously, when I have a book launch, I invite Labour politicians and hope that they will read my book cover to cover. And it was partly to get opinion formers,
(41:07):
not just my own party, on side that I wrote the book, because I wanted to show what was possible and the range of issues that we need to address, as we talked about earlier, Richard. Thank you. Thank you so much, Tim. I think it has been great to hear, so honestly, what you think, and how you can
(41:30):
also influence the right people, using your political experience. Going on to the next question: Matt James has written, what issues do you think might arise from the differences between the EU AI Act and the US Executive Order 14110? I'm thinking in the context of the EU focusing more on
(41:54):
safety versus the US focusing on commercial incentive. Yes, but the Executive Order, of course, really applies to the federal government more than anything else. It will have influence on procurement by the federal government, and therefore indirectly will have quite an
(42:14):
impact on the private sector and developers. But it's designed for a different purpose. And again, sadly, I don't think there's time for any more congressional activity, really, before the presidential election. But funnily enough, AI regulation in the States is relatively bipartisan, and a number of the bills that have come forward hoping to
(42:35):
try and create some sort of regulatory structure have been from different sides of the House coming together. So I am optimistic that there will be something, but I don't think there's really much difference at this moment which is going to impact on the philosophy of it between the US and the EU. I don't think the
(42:55):
US, for instance, is going to suddenly jump up and down on behalf of OpenAI and Anthropic and Bard at Google and so on and say, oh, what are you doing in the EU? Why are you doing this? I think that point is past, and I think we're going to watch and wait and see how the EU AI Act pans out, which I think we'll
(43:17):
all be watching with considerable interest. But I come back to my standards again. I think where the countries can really come together, and where we avoid conflict, is by going for these international standards, irrespective of how different jurisdictions actually legislate and regulate. I think, taking from your last statement, I'll take a
question from there, and this will be our last question, Richard, keeping time in mind. Coming from the Global South, I actually realize that we end up being the test beds for big tech and developed countries. There's no major tech transfer, and we are also ripe for exploitation. What are your views on the
(43:59):
above, and how effectively can this be addressed? I think we're back to Richard's point earlier. I think this is up to the different jurisdictions, and I think, again, the more muscle that you can put together with other countries to say, this is not good enough, you've got to be much more conscious that you need
(44:19):
to tailor-make your products for our purposes, rather than trying to impose a product which has been trained on purely Western material. And you can do that through the OECD; I think that's a perfectly valid way of doing it, or by using other fora. The trade partnership in the Pacific could be used as part of that. Anything that gives you greater muscle in negotiation, I
(44:44):
think is useful. But I think the first thing that needs to happen is that the politicians need to wake up, really, to the issues. Because I set up my all-party group in Parliament with a Conservative MP, a very well-respected Conservative MP, because I didn't think we understood enough about this in Parliament. And that was eight years ago, and I'm still not convinced we know
(45:07):
enough about it, and I'm still not convinced we're doing enough, because people don't understand enough. Thank you. Thank you so much, Lord Clement-Jones. I think you've always put it so honestly and put your views across so strongly. Back to you, Richard. We ended actually on time. Yeah, we should have mailed you a microphone that you could have dropped on
that last statement. Yeah, no, it's very powerful. I mean, I'm inclined to agree with you, but I also want to mention the huge amount of work that you have done and the huge amount of progress that you have made to raise awareness of these sorts of issues. And I'm sure anybody who's
spent an hour with you (and I've spent even more time with you; I'm very lucky) sees that you have a very good grasp of the wider issues and the implications. A huge thank you from me and this community for doing what you're doing, and I hope that you will keep the strength and the motivation to keep going with it, and we of course are at your service for the mission that you so diligently work on. Tim, thank you.
Thank you, Richard; thank you, Jasil; and thank you, everybody. It's been a great session. Really good to be with you. And I really like the international dimension, because we've got to crack this on an international
(46:14):
basis. Thank you. Thanks, Tim. We'll speak again.