Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:01):
Stop the World. Welcome to Stop the World.
I'm David Wroe. And I'm Olivia Nelson.
Today, Liv, is pure awesomeness. We have the wonderful Alice Friend talking on artificial intelligence.
Alice is Google's global head of AI and emerging tech policy, and she's a big brain on questions like how do we regulate AI?
(00:25):
How do we encourage the best outof it while avoiding the
problems? How do we integrate it into our
economies? Alice views AI as a normal
technology in the sense that like electrification or the
Internet, it's something that we are creating within an existing set of rules and standards.
It'll be adopted gradually over time, and ultimately it'll be
something we can control. Yeah, she's not a utopian.
(00:46):
She's not a pessimist. I, on the other hand, am one or the other depending on what day it is, so it makes for an interesting conversation.
We covered the idea of embodiment as being necessary to
achieving a general form of AI, often known as artificial general intelligence or AGI. The meaningfulness of ideas like superintelligence that would far outstrip us at any cognitive
(01:09):
task. And of course, what it means to
win or lose the global AI race. Alice also talks about the
policies that governments can put in place to encourage the
uptake of AI into our economies and societies, the impact on
jobs and the value of building sovereign AI, what she calls having the keys to your own data, your own computing power and your own AI capabilities.
(01:29):
Noting we are a strategic policy institute, it's worth remembering that AI capability is likely to determine the future of nations and therefore is very much a strategic technology. Yeah, so it's not just a frolic on my part; it's very relevant to our core project here at ASPI. We don't often quote Vladimir Putin on this podcast, but as Putin said of AI in 2017, the
(01:50):
one who becomes the leader in this sphere will be the ruler of
the world. I rarely agree, but in this instance I do. And based on the effort that many countries are putting in, that view seems to be prevailing at the moment. Alice is great on this because she can philosophise with the best of them, but she also has a practical groundedness that comes with being fully immersed
(02:10):
in AI policy. We hope you enjoy listening to Alice Friend. I'm here with Alice Friend. Alice, thanks for coming on Stop the World. Oh, thanks for having me, David.
I want to start with where AI is going now.
On the one hand, we have optimists telling us it's going
to solve every problem on the planet.
On the other hand, we have pessimists saying it's going to
(02:30):
destroy us all. Investors are giddy, even though
the big labs aren't close to making a profit.
The best impartial guess, it seems to me, is that it's going to be transformative. But it's very much up to us how we manage the ups and downs of that. What are the key features you feel reasonably confident about projecting forward, say a decade from now, when you look at the
(02:54):
current trajectory that we're on with AI?
Yeah, I think you summarised that really very well.
I think the next 10 years are really going to be about the votes that humans cast about how we adopt AI, how we use it, how we deploy it throughout our economies, how we use it in our personal lives and in our professional lives.
(03:16):
But you know, I think if history is any guide, in the history of previous general purpose technologies it really is about, you know, how do people incorporate this technology into the many, many facets of our lives? And I think what we know is that
this technology is very powerful.
(03:37):
It's very broadly applicable. It can be integrated into many,
many areas in our lives. And that we are going to need to
go through the kind of process we've gone through with other
profound technologies in our history to understand its
limits, to understand its benefits and to understand how
(03:58):
we can best use it to our benefit and then also understand
how we control it most effectively.
And I think that's what the next 10 years are really going to look like. And I'm somebody that subscribes to the AI as normal technology school.
I think it is much more likely that we are going to see that
(04:20):
process that is full of friction, full of human choices,
you know, full of unpredictable moments for sure.
But I think at every major juncture, human beings are really going to be the ones making the decisions about how we take AI on in our lives. And so I think that's what the future is going to
(04:41):
bring us. And I think we're actually
living that right now, right? We're real time gathering data
about how are people using AI, you know, how are they using
different AI applications? How are they responding to it as
it's being integrated in their work lives?
Sort of what? We in industry would call the
enterprise level. And then how are sort of
(05:02):
everyday folks using it for their own reasons. You know, how are they using it to learn information, to learn new skills, to think through thorny problems? How are they using AI as something that can augment their own productivity, again both at work but also in their lives as well? And there are just sort of a
(05:25):
million different ways that people can think of to use it.
And as people explore the technology, we'll learn all the different places and ways in which that exploration leads us, and the ways it's going to change the way we do all these things. So it's a very exciting time in my view, but also a time that I think will be, you know, much
(05:49):
more under our control than I think a lot of folks are concerned about. I can already tell we're going
to have a fascinating conversation because I don't
subscribe to the normal technology view.
I've got to say, I think I've gone through the sci-fi wormhole probably further than you might have, but I actually regard it as sort of categorically different.
(06:09):
So I suppose we'll sort of test that and perhaps at some stage
you can persuade me. But let's just start with a couple of the, I mean, the projections forward that we tend to hear about, which are artificial general intelligence and superintelligence, both of which I think have at their heart a kind of assumption that this is not going to be like previous technologies for various reasons.
(06:31):
But let's look at those one at a time.
And look, I mean, there are so many different ways to come at it that I'm going to pick one, which is a post that you put on LinkedIn recently. I mean, frankly, the video that accompanied it messed me up for about 12 hours afterwards because it showed two humanoid robots playing ping pong with extraordinary ability. And your point about it was
(06:53):
that, I mean, this is real progress in the robotics sense.
But the point you drew from it was that to achieve general
intelligence, you probably need some kind of embodiment of the
intelligence so that it's actually sort of doing things in
the real world, interacting with the real world. And you feel that that is somehow a fundamental condition for intelligence to become sort of generalised in
(07:16):
that sort of general purpose way.
Just explain why you think that is the case.
Yeah. So that post was about
artificial general intelligence and embodiment and me putting my
cards on the table, which is that I have always felt that we
aren't going to be able to achieve AGI, which has various
(07:38):
definitions by the way. But the one that I think is kind of the most useful is cognition that is essentially the same as the kind of cognition that the human mind is capable of. I don't think we're going to achieve AGI until AI is embodied, until it's in some kind of physical system that can interact with the world.
(08:02):
And the reason I think that is because that is so much of how human cognition not only develops in each individual organism, but evolved over human history.
So much of the human mind is about being a mind and a body in
the world, interacting with other minds and bodies in the
(08:24):
world and with the physical world and all of the laws of
physics and the laws of biology that we encounter.
And so much of our learning and so much of our understanding is
in that physical world. And, you know, this is a big area of debate among artificial intelligence
specialists, among neuroscientists, folks that sort
(08:47):
of, you know, focus on cognition.
So I have a lot of humility about, you know, what any of us
really know. But it's always just struck me.
And, you know, I also won't claim I got this out of nowhere. Early in my own studies, I read Terry Winograd on this
point, who wrote a book called Understanding Computers and
Cognition, and he makes this argument about biological
(09:10):
structures and how part of learning and understanding is
really an interaction with the structure of the mechanism that
does the learning and the understanding, right? And then, you know, also, I'm a parent. I've watched small kids sort of figure out the world.
And it just strikes me that if what we're trying to build is
something human-like, then you need to give it as human-like
(09:33):
conditions as you can. And so I just come back to, you know, you and I co-create our intelligence in some way, because we're social and our minds evolved to be part of a social system. And the way we create meaning is social as well. And so when we get into these really philosophical debates in AGI about what is understanding,
(09:56):
what is the nature of reality anyway?
How can we know reality? LLMs are incredibly powerful tools, but they are limited to the abstract representations of language. And so I've just always felt that you're going to need, you know, first multimodality,
(10:16):
which is where frontier models have gone in recent times. But I think you're going to need to get to embodiment before we really start to see something that's AGI-like. We can also then debate measuring it. How will we know we got to it? But that's a whole other question. Sure, sure.
(10:37):
Okay, well, look, I mean that makes a lot of sense.
And I mean, quite apart from its accuracy as a truthful statement, it also seems to me just on a values basis that it's healthier if the intelligence that we are creating understands the world, and the way that it engages in the world, in approximately the same way that we do.
(10:58):
And that embodiment is part of that.
Otherwise, I mean, just to take a really basic example, if it doesn't understand moving as a three-dimensional object through space, it might well not think about barrelling over an old granny on the footpath, if it doesn't understand that it is a physical being in a physical space. That's probably
(11:20):
a pretty lame example. But I mean, if you sort of
extrapolate outwards from that, certainly it's going to be a
healthier creation in terms of the way that it exists in the
world that we actually physically inhabit if it is
embodied in that sense, I suppose. And look, respond to that as much as you would like to.
(11:40):
But one other thought that I have is that it's possible then
for very powerful intelligence to be created that isn't embodied, and therefore doesn't have that sort of generality that's attached to human intelligence in sharing that experience of the world through embodiment, but is
(12:01):
nonetheless very powerful to the point of superintelligence. But it's going to be kind of alien by comparison with a general intelligence that's more attached to human-style cognition. And that is where we could
actually get into danger. Does that resonate at all for
you? And does the concept of superintelligence sort of resonate for you in some meaningful way?
(12:25):
Yeah, I mean, I think superintelligence is fascinating as a concept. I think your use of the word alien is quite correct, because I think whereas AGI again grounds itself on this human-like cognitive ability, you know, perhaps as smart as the smartest human ever to live, or as
(12:46):
smart as a combination of all the smartest humans ever to
live, right? We're all sort of circling around that kind of definition. But in any case, it's comparable to human capacity. ASI is about a speculated version of AI that is not like
human cognition that, you know, I think some would say exceeds
(13:07):
it by every measure, but also maybe just departs from it.
And so alien is a great word to use, because I think that is where we get a bit into the realm of science fiction, or at least theoretical possibilities. And it is hard to see what the
(13:28):
empirical evidence is for us getting to that other than, you
know, sort of a logic chain as it's been described by Nick
Bostrom and others who sort of think through what ASI would even look like. And I think a lot of analysts say, you know, we actually don't know. We don't know what it would look like, because part of the point is that this recursive self-improvement goes in directions
(13:49):
that humans wouldn't think of, couldn't think of.
And then you get into all kinds of weird conversations about
would we even understand what it was doing?
Would it still be able to understand us?
Would we be relevant to it? Would it be relevant to us?
What kind of limitations might be on it?
We don't know of anything in nature that is limitless, right?
(14:10):
That is capable of all things. So, you know, I think ASI is
sort of a fascinating thought experiment.
And I also think that there are very serious people that think about it as a serious possibility.
But again, getting back to AI as normal technology, I tend, and I think it's because I'm a social scientist, I tend to think of
(14:34):
the friction in the world and the hurdles to, you know, actual technology development and adoption as really meaningful friction, sometimes in ways that we intend and design and
sometimes in ways that we don't intend and that frustrate us.
So you know, the direct answer to your question is I think ASI is a useful and coherent concept, but you know, I think a lot
(14:57):
harder about the near-term kinds of AI that we're fairly confident are going to obtain, as opposed to ASI, where I think the probabilities are a lot lower. Yeah, I find the chain of logic,
though, fairly convincing and compelling.
The idea that something that is very smart, say at coding, can
(15:19):
actually build smarter versions of itself in order to move ahead
very, very quickly, for instance, or even just on the,
you know, the human engineering trajectory that we're on at the
moment. You know, the improvements in
large language models, for instance, or you know, through
reinforcement learning, as we saw with AlphaZero as an example, where
(15:44):
it basically just played against itself millions and millions of times until it was able to beat the world Go champion.
Those sorts of trends seem to me, even before we get to ideas like recursive self-improvement, which for the benefit of the listener just means basically machines that will build machines smarter than themselves. And so, I mean, the phrase kind of explains itself,
(16:04):
but even without recursive self-improvement, it seems to me that the improvement that we've seen over the last, say, 15 years is such that we could actually quite easily get to, you know, a state of superintelligence sometime in the next decade. I mean, do you find this sort of chain of logic compelling, I suppose?
(16:26):
I think the chain of logic is compelling, but again, it also has to make a lot of assumptions about what's
available to the system. It assumes compute at the
necessary scale. It assumes energy at the
necessary scale. It assumes, you know, infinite
inference capacity. I think it's making a lot of
(16:48):
assumptions about what's available to the system in order
to do this. Not only recursive self-improvement to the point of obtaining ASI, however you might measure it, which again is very fuzzy, right?
But then also I think part of the concern about such a
scenario is sort of the sustained operation of such a
thing. And I think again, there's just
(17:09):
a lot of friction between here and there, not just in the sense that there are physical limitations on every system that we've ever observed, but also, you know, humans again will still have a vote, and there will still be human limitations imposed on the system as well, which is one reason that
thinking about these kinds of things now is very useful.
(17:30):
Because I think part of being a responsible actor in developing AGI is thinking very carefully about how do we monitor these systems for capabilities beyond what we're comfortable with, and how do we design mitigations? How do we design systems, you know, in the case of heading towards ASI, for example, how do we make it so that it's incrementally
(17:51):
improving as opposed to making evolutionary leaps, right?
I think there's a lot of engineering and design still
very much available to us. And so these kinds of speculations are useful for that reason.
We can imagine a world that we do not want to get to and then
we can work backwards and say, OK, well, what do human beings
(18:14):
have to do to enact those mitigations?
And I just tend to have a lot more confidence in people. I think, despite all of us, that we have in fact over time managed quite well figuring out how to work with and live with very powerful technologies. I suppose to me, and I will get
(18:36):
us back on track in a moment, but the difference for me, I suppose, is that the prospect of something that is cognitively and intellectually more capable than us, which I think is an unquantifiable but real possibility, means that it is
categorically different to any other technology that has come
(18:56):
before. Even general purpose
technologies like electrification or computing and
so forth. The prospect of something being
smarter than us is something that has literally never happened in world history and, to the best of our knowledge, the history of the universe. To the best of our knowledge, we are the smartest things in the universe.
(19:16):
And the idea that there might be something smarter than us just means that it is in a different category. How does that sit with you?
You know, I'm not an evolutionary biologist, but I am sceptical of that argument that we are the smartest thing that has ever happened. I mean, first of all, we have no idea what is the smartest thing that's ever emerged in the universe throughout all time, right?
(19:38):
So we cannot make that claim with any confidence.
I also don't know that that isn't an overly anthropocentric
view that our intelligence is superior to every other form of
intelligence that has ever happened before in our planet's
history. I think there are many different
(19:59):
kinds of intelligence. I also think, again, back to
this idea that our intelligence is very social.
A lot of the power of human intelligence has been our
ability to come together socially and to combine minds. And so I think the notion of sort of mimicking one human brain and measuring against sort
(20:23):
of the uniqueness of a human intelligence is also a little bit missing the point too. I think part of the success of our species, if you will, has to do with the socialness of our intelligence. Now you can think about systems that mimic that pattern as well, for certain.
(20:46):
But I just tend to be wary of bold claims that nothing's ever been smarter than us, and therefore of making any sort of prediction, with any confidence, about what will happen if something smarter than us arises. You know, we just can't
(21:07):
have that kind of confidence, I think.
All right. Thanks for indulging me there,
Alice. Let's talk about how AI can be
and is being used in our economies and societies.
Now the excitement tends to be around large language models.
Every time a new model comes out it's front-page news, et cetera; everybody gets very breathless.
(21:30):
There is increasing commentary, though, saying that really it will be about the way individual countries and economies adopt and integrate AI at a much more basic, prosaic and
potentially dull level, rather than always being about those
frontier models. It's very relevant for a country
like Australia, which probably isn't going to be building
(21:51):
frontier models anytime soon. But we do have a highly educated population and a relatively advanced economy that can benefit very greatly from AI. How does that sort of commentary resonate with you? I mean, it certainly seems to fit with a lot of what you've been saying, that in terms of the geopolitical and the economic race around AI, it will be countries that
(22:12):
adopt and integrate it that get ahead.
Yeah, I mean, I do think, especially at this point, you know, the story of AI is going to be about how we adopt it in the, you know, near to medium future. I do think we need to continue to enable innovation
and R&D, you know, fundamental research about AI, you know,
(22:36):
continue pushing on the frontier.
So I think it's important for that work to keep happening, but I also think that, you know, a technology like that in its raw form doesn't really do us that much good as a society. What does us good as a society is using it for beneficial purposes. And as I said earlier, you know, that takes a lot of work that
(22:59):
doesn't just happen naturally. That takes, you know, on the individual level, on the firm level, on the government level and on the societal level, figuring out useful applications of AI. And again, because it's a general purpose technology, there are just a lot of different ways that human beings can use it.
(23:19):
And I think we're still very early days in that story.
I mean, just in Australia, I know Public First did a study
that came out in February that found that only about one in 10
Australians feel like they really understand how to use AI
tools. I'm sure that number is better
now, because February in AI time is like ancient history.
(23:40):
But, you know, you could probably cite similar statistics
in other parts of the world as well.
People are just starting to figure out how do I incorporate
this into my workflow? What do I find useful?
What's actually speeding up my day?
What's actually helping me? And then of course, across
industry, we're all sort of constantly introducing new
(24:01):
tools, new features, new ways that we're trying to figure out that users appreciate. There's a real race among industry players, you know, to figure out what users like and how users will use AI. What's an intuitive interface
for AI? What's an intuitive way for
folks to use it? And to what end?
(24:22):
And again, I think we're early days in figuring out what those things are. And some of the earliest, you know, big leaps we've made are on the scientific discovery side. They're for folks that are doing sort of deep research using AI tools, work that involves lots and lots of data, that involves being able to work through extremely
(24:45):
complex and formerly time-consuming calculations. But when it comes to how is this going to affect our economies? How is this going to affect the work in companies? How is this going to augment the average worker? How are people going to use it, again, to make their daily lives better, easier? That's a long, you know,
(25:09):
detailed, again person-by-person, you know, sub-organisation-by-sub-organisation process. I do think that's where we're going to really see the benefits of AI manifest, right? So there are a lot of predictions
out there about kind of what is this going to mean for the
economy. That same Public First survey
(25:30):
predicted about $240 billion in economic value could be added
for Australia alone. And there are lots of
predictions about global GDP increasing by, you know,
significant percentages. And so if this is really, you
know, what we have in the offing, then figuring out how do
we obtain that I think is the next big challenge.
(25:51):
And that's an adoption story. It's not just the raw technology
itself. So the Australian government has
recognised this, it seems to me, and they're talking about this as one of the big solutions to our productivity problem. Like a lot of economies right now, Australia has had pretty flatlining productivity for at least a couple of decades, outside of, you know, a
(26:14):
couple of sectors that do explain those gains that have
been made. And so our treasurer and our government have been talking about AI as an obvious solution to this. I just want to quickly say, I
absolutely recognise what you're saying there about things like scientific discoveries and solving those big mathematical problems that are just time consuming. I look at something like AlphaFold, which has solved protein
(26:36):
folding, paving the way for all sorts of drug discoveries and health discoveries, and it makes a chatbot just look like, you know, a cheap toy, really. Those sorts of things are really where a lot of the huge improvements to human welfare are going to come from.
What do you think are the key things that governments can be
doing to bend their economies and their societies
(26:59):
more towards productive, healthy use of AI?
Yes, there are a lot of things governments can be doing right
now. I think one of the most
important ways for governments to engage on AI and
to really seize the opportunity of AI is to recognise all the
preparation we need to be doing now, particularly in things
(27:20):
where governments excel, where they can invest at a scale
that's really beneficial to their whole societies.
And that includes investing in skilling the labour market as
well. Something that we recognise at
Google is that, you know, there probably will be a labour
transition. And so the important way to get
everyone through the transition well is to really put a lot of
(27:44):
money into reskilling, upskilling the labour market, to
really be able to leverage AI and its ability to boost productivity and contribute to economies in some of the ways I described earlier. You know, there are also big infrastructure investments that governments can make.
You know, there's a lot of talk right now about AI and energy.
(28:07):
And it is true that if we're going to do not just training at scale but inference at scale, you know, increasingly that's what the story of AI will be, especially if we in fact drive enough adoption; inference use of compute is just going to go up. And so thinking through all the
(28:28):
ways that we can add to the energy grid, you know, certainly
we at Google have been thinking about that hard and making our
own investments so that, in fact, it is additive.
But, you know, having a modern way of powering these systems is
going to be really essential. And I think governments around
the world need to be working hard on that set of challenges. And then, you know, there's also
(28:51):
an entire set of questions around how do you regulate this
technology, and how do you do so in a pro-innovation way that
balances concern about risks with guaranteeing that you can
leverage all the benefits. So there are plenty of things that governments can be doing now to really sort of shepherd adoption in ways that are the most beneficial for their own
(29:13):
societies. You mentioned the labour
transition. I'm glad you brought that up again. That's one area where there are optimists and pessimists in terms of what effect it will have, how disruptive it'll be on the workforce, how permanent that disruption will be. Again, I'm slightly on the pessimistic side, in the sense that I just wonder what the value
of human labour will be in a world in which there are
(29:34):
machines that can do everything or almost everything more
capably and without getting tired and without wanting breaks
and without forming unions and all these sorts of things.
So yes, I think new work will be created as we demand new products and services. I mean, obviously there's no reason there should be a ceiling on demand.
It should be infinite. But what I'm not convinced about
(29:56):
is why the jobs created to meet those demands necessarily have
to be filled by human beings. What are your thoughts on that?
So, you know, labour is one of those topics where we have to
have a lot of humility about what we know and what we don't
know and what we have empirical data for and what we don't.
(30:17):
And we just don't have a lot of empirical data right now, if
we're honest. And so what we do have is
historical precedent. And so if you accept the premise
that, again, AI is a general purpose technology that is akin
to past general purpose technologies, which have also
been very profound and have been disruptive, but in the end,
(30:40):
humanity not only adapted to them but in fact used them in really beneficial ways, then you can at least look to those
historical precedents. And you can think about, you
know, electricity. You can think about steam.
You can think about particular inventions like the car, like
the aeroplane. You can think about the Internet
(31:02):
itself and the disruption but also creation associated with
that invention. I can never remember the precise statistic, but it is something like the overwhelming majority of jobs on the market in the United States today did not exist before 1945.
And if you had asked someone in the United States in 1900 about,
(31:27):
you know, my job, for example, they would not have understood
what it even meant, right? They wouldn't have been able to
sort of contextualise or conceive of such a thing because
so many things have also happened alongside it.
So it's very hard for us, I think, practically to predict.
But again, if history is a guide, then it is more likely
(31:49):
that AI is going to augment some jobs and create a lot of new jobs, and possibly make some jobs obsolete the way past technologies have done with job categories in the past.
Another thing that a lot of labour economists focus on, as the technology is today, is that it tends not to replace whole
(32:12):
job categories, because job categories are actually a large
collection of tasks. What it does tend to do is
automate tasks, and a lot of those tasks are the kinds of
things that make people's jobs really boring.
And in fact, people that can automate the boring stuff in
their jobs find a lot more meaning in their labour after
(32:35):
being able to adopt AI because it moves them up the value
chain. It means they can focus more of
their time on those meaningful things.
So one study that a lot of folks point to was of call centre workers. And the really interesting thing about that study was that if you were a top performer in the call
(32:55):
centre, the AI assistant didn't help you that much, but if you
were a lower performer, it dramatically improved your
performance. So the other thing about AI that
we've observed is that, again, it augments human labour. It actually upskills people in the same role; it makes them better at their job, which is a pretty exciting thing actually,
(33:19):
because it actually brings people up into more meaningful
work. And I think a huge part of our conversation right now around labour, particularly in industrial economies, is around the meaning of work and
finding things to do that are more meaningful.
And if AI is going to help with that, you know, I think that's a
thing we should embrace. But also, again, you know, track
(33:41):
really closely and gather that empirical data so
that we'll have a better sense of the direction this is
trending. I think that's a really
important point about, I suppose, being prepared to
broaden and redefine as necessary what we regard as
constituting valuable use of human time and endeavour.
(34:04):
And interestingly, I mean, if you went back 100 years and
told a farmer, for instance, that their great grandchild was
going to be a prompt engineer, not only would they not
understand it, but they would probably look at it with some
scorn, saying, well, that's not work, you know, that's not producing food, that's not using your grit and your muscle and all that. I mean, I'm sorry, I'm
(34:27):
typecasting farmers here. But, you know, not only might we
not understand future use of human time and labour, but we
might almost look at it from a judgmental point of view that
it's not, you know, valuable or it's not a justifiable use of
human time. So I think it's a really
important idea that we're going to have to get used pretty
(34:50):
quickly to the idea of reorientating the way we see
human effort and maybe even start to move away from some of the economic models that we have at the moment about, you know, simply salaries in return for labour, and start
to even think about, you know, other ways of recognising the
(35:10):
value of what people do with their lives.
Does that make any sense? Yeah.
I also think, you know, it is unlikely that the pace of change
is going to be uniform across all sectors of the economy,
right; that's unlikely. What's much more likely is that
particular sectors will have faster uptake of AI than other
(35:31):
sectors do. Again, as AI itself evolves, as
we sort of invent better or different models and systems,
there will be increasing use cases for it.
But the context matters so much in AI deployment that I think
that's the other thing that tends to get a little bit lost
(35:51):
in some of the discussion right now about labour and jobs is,
well, it sort of depends on the sector you're talking about
or the category of job, you know, how AI as we have it today
is affecting that category, you know, and what we might be able
to foresee in the future. So yeah, I just think,
(36:12):
and we are again back to my theme here, there are just a lot of points of human decision, and a lot of points where things
aren't going to be necessarily or naturally faster than we are
able to think them through. I think everybody needs to take the message that it really is up to us how we implement policy
(36:35):
over the next 10, 20, 50 years to make this work for us, because it should be the best thing that's ever happened to us. I mean, yes, it will be disruptive, but we should be
able to manage that disruption. And so let's all please agree not to screw it up. I think that's hopefully one of the messages that you and I drive home to
(36:57):
people. So let's just talk about
regulation then. And you've touched on it
already. This is another one where it
seems to me, and again, I'm coming from that different
premise that we're talking about a categorically different
technology here. But it seems to me that we don't
have great historical paradigms to work with when it comes to
regulation. So I'm thinking about, you know,
templates for legislation, general approaches that we have
(37:19):
tended to take in the past. It feels to me like
there aren't existing models that we can just sort of copy
and apply to something that is as wildly different as I regard
AI as being. One way to ask you the question, I suppose most directly, is what do you see as
the main basic principles of regulation that a generic
(37:42):
country should be following, and do you give any credence
to what I was just saying about the idea that, you know, maybe
we have to come up with completely new paradigms?
Yeah, let me take those in order.
So again, I think because it is a general purpose technology and
will be integrated in diverse ways across sectors, the way to
(38:05):
approach thinking about regulation of AI is not to think
about one law that will work to regulate AI across all of those
different use cases. The far wiser course is to start
where most governments already are with that sectoral
regulation. And the benefit of doing that is
(38:27):
that sectoral regulators are already quite steeped in that
sector, in those contexts. So the example I always use is, you know, in medicine, there's usually a regulator for pharmaceuticals, and that community knows vastly different things than the regulators that do aviation.
(38:48):
You see AI integrated in both places.
You see AI integrated into drug discovery right now.
You can see AI sort of in avionics systems, right? But totally different use cases, totally different risk scenarios and risk surfaces. You know, both of those can also be high-risk activities, but also very high reward.
(39:11):
And so that surfaces a lot of things about why you want to go
sector by sector with AI and start with your existing
sectoral regulation and go through a process where you you
examine that regulation and AI and try to determine if there
are gaps in what you already have.
(39:31):
And, you know, it's my view, both professionally and
personally, that most regulationthat exists in these sectors
tends to cover AI already. And if there is something novel
about AI, then indeed you need new regulation for that.
(39:52):
But it's important to discover that first, something that we
say at Google a lot is if it's illegal without AI, it's still
illegal with AI. If it's regulated without it,
it's still regulated with it. And those sectoral regulators
are going to have the expertise to understand, well, how might
we need to adapt to this? Now, what is often helpful for
(40:13):
governments is to have some kindof hub of expertise in the
technology of AI to support those sectoral regulators.
We sort of traditionally called that the hub and spoke approach,
but that, you know, I'm a formerpolicy maker myself.
That strikes me as the most pragmatic way to to go about it.
You don't have to sort of reinvent government.
(40:34):
You don't have to invent a large new department.
If you already have a Ministry of Technology, certainly that's
a good place for that AI expertise to reside in your
government. I mean, if you have a Minister
of Health, that Minister of Health should still be able to
adapt their regulations to AI. So I think that's a
good starting point. Another really important thing
(40:55):
that we like to point regulators at is the evolving landscape of AI standards. You know, the International Organization for Standardization (ISO) has a robust agenda on developing AI standards.
And the more the entire ecosystem, policymakers, you
know, industry players, civil society can be working off the
(41:18):
basis of those technical standards for AI, the more we'll
be able to create interoperability across the
globe on how we understand safety, how we understand
performance, how we understand management of this technology.
And the more interoperable it is, the fewer gaps and seams
there will be, which is a really good thing for AI safety as
(41:40):
well. You don't want an uneven safety
standard across the world. So ISO is well on its way.
Again, it's got that robust agenda.
They've already promulgated several AI standards.
They've got a lot more in the pipeline.
So we're also always pointing to that process and encouraging governments to think through ways for their local
(42:01):
regulations to really take on board what's in those ISO
standards. It's really interesting; it feels almost like a reversal in 2025, where you actually have international global standards, well, not governance, but certainly standards, almost kind of leading the way for individual nations.
(42:23):
But you feel that the ISO is in a place where it's done enough work that individual countries can actually look to that global picture and say, OK, we can take our cues from that. Yeah, and there's still more to
go; again, they have a pipeline of things that they're working on. But you know, the great
thing about standards bodies is that standards are developed by
(42:45):
technical experts. So it really is about, you know, the standard measures for these technologies, standards for how to build these technologies securely, standards for deployment, standards for management. The first ISO AI standard was ISO 42001, which is about
(43:05):
how organisations can manage AI. So yes, I think that is a
very healthy direction for governments to look in for a lot
of specifics about, you know, how can we use AI, approach AI, measure AI, again in a way that's consistent with how
others across the world are measuring it as well.
(43:25):
OK, all right. Now I've taken us down so many rabbit holes, and apologies for that; we haven't covered everything that I wanted to cover, and I'm conscious of time. But there's one more thing, and it's almost the flip side to what you were just talking about, and that is this question of sovereign AI and the value of
sovereign AI, which countries are understandably looking at
given the geopolitical picture that they're facing.
I mean, I always think of South Korea as a very interesting
(43:47):
example. At the moment it's building its
own sovereign large language models.
Language is an issue that makes sense for the Koreans because
they, you know, can train and build their
models around the Korean language, which has advantages.
But then you look at a country like Australia, which we share a
language with the United States and therefore that's not one of
(44:08):
the barriers, but we are nonetheless... Well, there are a couple of projects being proposed, or at least under development, of the first sovereign Australian models.
They're not going to rival Gemini or GPT-5 or Claude or
anything like that. But you know, they will be our
own. They can be trained on some
(44:29):
of our own data and therefore kind of refined and fine-tuned in a way that works for the purposes that we
would like. Tell me from your point of view, what is the value of sovereign AI models, you know, in economic terms, in strategic terms given geopolitics, and again maybe even in cultural terms as well.
And is it worth the effort for a country like Australia to, you
(44:52):
know, to pursue that, perhaps in a public-private partnership kind of way? Yeah.
I think the important thing to think about when you're thinking
about sovereignty and AI is really about, you know, do
countries have a short control over the way AI is used
domestically, over the data and information that is sovereign
(45:16):
and also over do they have a resilient system?
And I think what I would cautionis not to think of all of those
things as having to be purely territorially domestically
assured, particularly that resilience piece.
Our own Google Cloud assures resilience in a variety of ways.
(45:39):
We do offer national solutions for customers that prefer it,
but a lot of sort of cloud resilience has to do with being
able to distribute in different locations and being able to have
some redundancy in those locations.
And so, you know, I have a background in international
(46:00):
relations. Sovereignty and territoriality
are very linked concepts. And I think when you're thinking
about AI, you have to think a little bit more about, you know,
not kind of whose flag is in theserver, but who has the keys to
it and who has the assured control over the information and
the data that's in it. That's the key question, not so
(46:23):
much sort of the geolocation perSE, because again, you're going
to want that resilience. And that redundancy built into
the system. So I think that's the kind of
question that countries leaders should ask themselves as they're
thinking about sovereign AI solutions.
Language concerns are sort of building an LLM that really sort
(46:43):
of reflects a country's uniqueness.
I think is, you know, a reasonable aspiration and a fair
critique. But you yourself pointed out,
you know, with many foundationalmodels, there's a lot you can do
to fine tune them on information.
So I think there's just a lot ofways to assure sovereign AI
control that just require a little bit of shifting of that
(47:06):
traditional sovereignty mindset to make sure that control and
that resilience stays with with a particular country.
All right, Alice, look, as you can probably guess, I can talk
about this all day, but I think.We've covered a lot.
Let's catch up sometime soon because there are 1,000,000
questions that I wanted to ask but haven't had time to.
But look, you've been really generous, Alice.
(47:26):
It's been great to talk to you. Thanks so much for coming on
the podcast. Thanks so much for
having me. That's it for today, folks.
Please join us again next week for another episode of Stop the
World.