
April 2, 2025 • 34 mins

On this week’s episode of The Business of Tech, Sarah Box, a digital policy specialist at the Ministry of Business, Innovation and Employment, shares insights from her three-month Harkness Fellowship in the United States examining AI policy direction.

While Trump's first term focused on innovation and capability development, Biden pivoted toward trust and safety. So what does Trump's vision for AI look like, and how best can New Zealand steer its own path on AI policy and regulation? Find out on The Business of Tech, streaming on iHeartRadio or wherever you get your podcasts.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
This week on the Business of Tech, powered by 2degrees
Business: artificial intelligence. Under the Trump administration, it's all
about sustaining and enhancing America's global AI dominance, according to Trump,
who, on gaining office for a second time, revoked President
Biden's Executive Order on AI. That one emphasized safe, secure

(00:24):
and trustworthy development and use of artificial intelligence. But Trump's
administration did quite a lot on AI in his first term,
before ChatGPT burst onto the scene and made it
a real mainstream issue. The US led multilateral cooperation on
many AI developments. So what can we expect from a

(00:44):
second Trump presidency, especially as new, less resource intensive large
language models like DeepSeek appear to be changing the
economics of intensive AI applications, and the geopolitical implications? Joining
me on episode ninety one of The Business of Tech
is Sarah Box, a Ministry of Business, Innovation and Employment

(01:05):
digital policy specialist who's one of the people in government
at the moment really focused on our approach to AI
as a nation. Sarah Box, welcome to the Business of Tech.
Thanks so much for coming on. Now, you spent around
three months late last year in the United States as

(01:26):
a Harkness fellow. What a great opportunity. I was one
myself over a decade ago, looking at the future of
public interest journalism. I went to ProPublica in New York,
the Center for Public Integrity in Washington, DC. You know,
it was a fantastic experience. Your fellowship looked pretty exciting.
Three months examining AI policy direction in the US. Who

(01:49):
were you hosted by over there, Sarah?

Speaker 2 (01:51):
That's right. Yeah.

Speaker 3 (01:51):
So I organized a three-month stay based in Washington, DC,
and I had the Observer Research Foundation America host me.

Speaker 3 (02:00):
So that's a relatively small think tank in DC, but they're affiliated with their
Indian counterpart, Observer Research Foundation, and so they have really good
connections into folks working on cybersecurity, AI, semiconductors, and a
lot of policy areas related to AI. So it
was a great place to be for that three months.
I also had a bit of time in San Francisco, Atlanta,

(02:22):
and a few other places, so I got to travel
around a little bit.

Speaker 4 (02:25):
Yeah.

Speaker 1 (02:25):
The Observer Research Foundation is a great organization; it does a
lot of really good research and policy advisory work. And we'll
put a link in the show notes so that people can
access that. But boy, what a time for you to
be there obviously through the election campaign and then into
the election itself in the US, Trump winning by a
decent margin, coming home in December before he got a

(02:49):
chance really to enact DOGE and all the policies and
executive orders that he's issued. Walk us through the approach
of the Trump administration to AI, because it goes back
to his first term, and a lot of the regulations and
philosophy around AI from then are still in place today.

Speaker 2 (03:07):
Yeah, that's right.

Speaker 3 (03:08):
So I think that at the time that the first
Trump administration came in, that was when people started to
realize what applications AI really could lend itself to, and
so there was a lot more interest then in supporting
AI technology and then starting to think a little bit
about AI governance, and so in a way that the
first Trump administration really did set down some of these

(03:30):
basic policies as you say, that still are there today,
particularly related to R and D capabilities, personnel capabilities, and so on.

(03:39):
And I think really at the time that the first
Trump administration was in, the thinking was very much around
how can we get the best out of AI? And
when Biden came in, it was almost like shifting to
thinking the worst of the technology. It's a little bit blunt,
I think, but that's sort of how I see the
general mood switch between the first Trump administration and the
change into Biden.

Speaker 1 (04:00):
Which, as you put it in your report, went from a
focus by Trump in his first term on innovation and
capability development around AI to a pivot to trust and
safety under Biden. And his executive order was quite far reaching.
For instance, I think he wanted the likes of
OpenAI to basically show the government what they were doing.

(04:20):
Invite government officials in and say, this is what we're developing.
It's cutting edge technology. Here it is, here's the source code,
here are our models. So you can establish for yourself
as the government whether you think it's safe. Quite far reaching, yeah,
it was.

Speaker 3 (04:35):
And if you think back to sort of that switch,
I mean back under the Trump administration, they really put
an emphasis on American leadership, and his AI Executive Order
was looking to prioritize federal R and D research. It
was about boosting compute and data resources and building up
the workforce.

(04:53):
And so one of the concrete policies they had, for example, was creating twenty five AI institutes that were based in US universities.
They were focusing on different aspects of AI research, you know,
depending on the industries and whatever state they were in,
from agriculture to environmental science and building up talent.

(05:14):
They also updated the AI R and D Strategic Plan. They were out
there internationally. They championed the OECD AI Principles. They were very
active in the G seven. They supported the Canadians, for
instance in twenty eighteen when they came out with the
Charlevoix Common Vision for AI, which is all about grasping
the opportunities and supporting entrepreneurship. And then you had Biden come

(05:40):
in and the first I guess notable policy there was
his blueprint for an AI Bill of Rights, which was
all about focusing on the risks of bias, discrimination and inequities,
threats to privacy and so on. And then, as you say,
you had the Executive Order come in with a raft
of actions around AI governance, creating an AI Safety Institute, requiring

(06:03):
reporting on these dual use models and so on.

Speaker 3 (06:06):
So yeah, really, whilst I think Biden did have some policies around capability and innovation, there was a definite switch at that time.

Speaker 1 (06:14):
I guess during Biden's administration, that's when we saw
the launch of chat GPT, generative AI exploded into the
public consciousness, and you had the godfather of AI, Geoffrey
Hinton, and others basically saying, we think this is an
existential threat to society, artificial general intelligence. So he was

(06:35):
reading the signs, Biden, and going, we need to do
something about this.

Speaker 3 (06:40):
I reckon this is going to be really fascinating to watch because
while in theory all of the actions under Biden's executive
Order are actually under review right now and could be terminated,
there are areas of common interest that you'd expect the
Trump administration could support. And the question is whether the
optics of keeping something from the Biden era just

(07:01):
won't play with the Republican constituency. So, for instance,
Biden's executive order required annual risk assessments of AI in
critical infrastructure. You'd think maybe that might have bipartisan support.
It increased resources, like data, to startups and small businesses
trying to get competition going. It asked the National Science Foundation

(07:23):
to reorient some of their existing funding around AI training,
trying to get more workforce development, and it all seems,
at least to me, to be consistent with Trump's goal
of American leadership, and you would hope that some of
that policy work might actually survive.

Speaker 1 (07:39):
We've seen a lot of deal making. The Stargate project
launched the day after Trump's inauguration, with Oracle, OpenAI
and SoftBank putting five hundred billion into AI infrastructure. So
he's very much looking at it through the lens of investment.
But we've also seen in the last couple of months a
lot of pulling back from multilateral arrangements on a range

(08:01):
of issues. As you pointed out, the OECD principles, which
we as a nation have signed up to, happened
under the first Trump administration. What's your sense about how
much of a global consensus model Trump is willing to
pursue on AI. You know, we had the Bletchley Park Declaration.
The US is still part of that, so they're sort

(08:22):
of still in the camp looking to other countries to
collaborate on this to some extent.

Speaker 3 (08:27):
Anyway. Yeah, yeah. So my first observation was that, you know,
Trump has nominated the same team that he had in
his first term, which you'd hope might bode well for
international engagement. So Michael Kratsios, he was the Chief Technology
Officer under Trump mark one. He's now set to be
the Director of the Office of Science and Technology Policy, which

(08:50):
leads out on AI. Michael Kratsios was actually
a key player in getting those OECD principles over the line.
He was ably assisted by Lynn Parker, who has also returned.
She's going to head up the President's Council of Advisors
for Science and Technology. Lynn was also at the G
twenty and I think, you know, both of them were
quite pragmatic and constructive operators in international engagements, and hopefully

(09:15):
that will continue. Having said that, of course, you know,
I think US engagement's probably going to become a bit
more muscular, if you like. We saw that at the
Paris AI Action Summit, where Vice President Vance was I
think crystal clear that the US was not going to
tolerate its firms being constrained by anti innovation policies put

(09:37):
in place by other countries. I think you're also going
to see the Trump administration making greater use of trade
policy and industrial policy to take forward their AI objectives,
and of course that's going to have trickle down effects
to other countries too. So yeah, it is going to
be interesting to watch this one player.

Speaker 1 (09:53):
Yeah, Vance in Paris. That was one of many provocative
speeches that the Vice President gave when he went to Europe recently,
very critical of how the Europeans do things. Does he
make a good point though on AI? US leadership in
AI really has been down to that sort of permissionless
innovation to some extent, free of really tight regulation like

(10:15):
GDPR and the AI Act that the EU has now introduced.

Speaker 3 (10:19):
Certainly for the firms operating in the US. So I've
been interested to read over the last little
while about the submissions coming from AI firms into
OSTP regarding the AI Action Plan that they want to
put out by midyear. And so one of those submissions

(10:39):
I think it was OpenAI, really playing the China
card and saying, look, if you want leadership, that's fine,
but you need to look after us. And essentially what
that means is pretty low regulation or no regulation on
the AI firms, and sort of holding that over them,
I guess, as a bit of a threat in terms
of the settings that they put in place.

Speaker 1 (11:01):
We've seen the arrival of DeepSeek. That was a
big moment for the AI industry, and tied into that
was a debate about AI chips, exports of which the
US has limited.

Speaker 2 (11:11):
Yeah, that's right.

Speaker 3 (11:12):
I mean, I think the DeepSeek innovation took everyone
a little bit by surprise. They've shown that you can
train models at a far lower cost. It also, I
think shows you that when you keep a technology from
a country, as you know they were trying to do
with China and these more advanced chips, it just gives

(11:33):
the imperative or the impulse to innovate around it. And
so that's what they've done. And I think the emphasis
that China is now putting on some of these smaller,
faster models is actually quite compelling for small countries and
countries in the Global South who don't actually have the
resources of the US. So China's putting itself in a
pretty interesting position as well.

Speaker 1 (11:54):
Pretty exciting for us as a small developed nation as
well to see DeepSeek and more recently the Manus model
emerging out of China. They suggest that we could
actually do quite a lot with models that require much
less hardware capacity.

Speaker 3 (12:09):
Yeah, for sure, I think that's where you can see
some opportunities coming. We're never going to have the resources
of the US obviously for our investment, but we do
have unique sources of data. We've got smart people in
New Zealand, we can build some of these models at
a smaller scale, as China shows it's possible.

Speaker 4 (12:31):
Yeah.

Speaker 1 (12:32):
So the geopolitical tension, I think, as you said
there: by restricting that, and we've seen this in semiconductors
and mobile technology in general, restricting it is actually forcing your opponent
to innovate, which has been quite good for the Chinese.
But how do you see that playing out in terms
of how it affects the AI that we, as citizens

(12:55):
and consumers, use? Do you see this sort of bipolarization
of technology into two different camps, where we experience our view
of AI chatbots based on Western or American technology, and the
Chinese and the developing world have a different view?

Speaker 2 (13:13):
Oh, great question.

Speaker 3 (13:15):
You would hope that we don't see the world split
into two different camps of technology experience, and I think
the multinational players coming out of the US are keen
to maintain their global markets.

Speaker 3 (13:26):
What I would say is that I don't think the US would take kindly to
its allies supporting China too closely on technologies like AI,
and that probably includes things like procurement of any Chinese technology,
and it could go to the extent of, you know,
what sort of consumer products we're bringing in as well.

(13:51):
And I think it's worth remembering that back under the
first Trump administration, they banned the Chinese telecommunications company Huawei
from buying certain US tech without special approval and basically
barred its equipment from US telco networks on national security grounds.
So there's always going to be this dance between the

(14:12):
national security and the civilian kind of applications of AI.

Speaker 1 (14:16):
I think one of the things you looked at when
you were in the US was obviously federal policy, but the states
of the US are also regulating around AI, and it's
this sort of patchwork emerging of policies and regulations across
the states. California has implemented some, and decided not to implement
others that may curtail the AI industry, which is very
much centered in California. What was your sense about what's
going on there, what's driving activity at the state level?

Speaker 3 (14:47):
Yeah. So something I hadn't fully appreciated, I think, before
I went to the US is that you do have
AI policy coming not just from the executive branch of
the government, but also from Congress and also from the
state level politicians. And so at the end of last year,
there was something like more than seven hundred
AI bills under discussion across US states. And in

(15:10):
many instances, those bills were using slightly different definitions. They
had different scopes, different thresholds for risk, and
all of these sorts of nuances.

(15:21):
A fair number of them were about consequential decision making, so trying to ensure that government agencies
themselves weren't introducing bias or of discrimination when they're making
decisions about education, housing, and so forth. But you mentioned
the California bill, and that was one angled more towards
the technology itself. Out of those seven hundred bills, there's

(15:44):
really only a handful, literally, that have been passed.

(15:47):
One of those was the Colorado bill, which was focused
on AI systems and consequential decision making. It's not come
into force, it's due to come into force in twenty
twenty six, but already it's under review to reduce the
risk that it deters innovation and investment in Colorado. They
had something like two hundred firms petitioning the government just

(16:08):
a couple of weeks after it was passed, saying, no,
this is way too broad and too vague, this is
going to destroy the industry here in Colorado. Firms that
I spoke to in the US said, look, a state
by state approach really is costly for small firms. Big
tech can afford it, they can lawyer up. Small ones can't.
And I've seen in a number of submissions to OSTP

(16:29):
about this AI Action Plan that the federal government should
think about passing light touch legislation that would actually preempt
all of these state laws. And I think what that
is is a judgment that the regional flexibility that you
might have and the opportunity to experiment just isn't giving
you the benefits to outweigh the cost of the patchwork

(16:53):
if you like.

Speaker 4 (16:54):
And I guess that is our advantage with our system here.

Speaker 1 (16:58):
Sure, and in things like the water supply, it might
be done by regional or local councils. But when it
comes to something like AI, you know, we get guidance
from the government, and the government has said it wants
to take a proportionate and light touch approach to regulation
on AI. So you're not running into a scenario where

(17:19):
one part of the country is saying, no, you have
to process data in this particular way, and we need
to see your models. That's good from an innovation point of view.

Speaker 3 (17:28):
Absolutely. I mean, it would be ridiculous for New Zealand to
sort of split into different ways of doing things. And
I think this shows the importance of us being engaged
in international dialogues on AI. So you know, there are
guidelines and international standards being made out there, and New
Zealand needs to have a seat at the table and

(17:50):
have a voice so that we can express, you know,
what are our interests and try and shape some of
those global discussions around AI.

(17:59):
I think keeping those international connections as well helps us tap into some AI resources
that we might not otherwise have, so really important, I
think to keep reaching out.

Speaker 1 (18:11):
You highlighted in the report energy infrastructure, which really has become
a talking point in relation to AI. We have data
centers being built here by the likes of AWS and
Microsoft, which is great. You know, there's a lot of
productivity that can come from AI, but energy is
a sticking point in some countries. The AI chips use a

(18:33):
lot of energy to create and train these models, and to operate
them as well. What was the feeling you got there
about where this is going to go and what we
can learn from how to approach the energy equation.

Speaker 3 (18:47):
Yeah, well, the US, I think has a real problem
with aging energy infrastructure. They've got a problem with transmission
and apparently real snarl ups with consenting and permitting. A
study by Goldman Sachs last year suggested the US
needed fifty billion dollars of investment in new generation capacity

(19:07):
just for data centers. You saw last year some of
the tech firms starting to take action for themselves. Some
were working with energy companies on nuclear options, particularly these
small modular reactors that you can build pretty quickly and
close to the grid as a way of pursuing carbon neutral,
carbon free goals while also getting the energy that they need.

(19:30):
You mentioned Stargate earlier. You know, that's five hundred billion,
a mind boggling amount of investment in AI and data centers
in US locations. President Trump's also smoothing the path I
think for energy production. And you've seen recent announcements coming
out suggesting that there may be new sources of energy
coming from Alaskan resource sort of developments and offshore drilling.

(19:54):
And I read something also about LNG. So there's a
lot going on around energy in the US at the moment.
Back here in NZ, I think the most recent
assessment from Transpower, which came out in the middle of
last year, suggested that demand from data centers wasn't actually
posing a risk to security of supply for us in

(20:15):
the next ten years or so. We've got a
pretty green electricity grid already. I think it's over eighty
five percent renewables. You've got firms like Microsoft signing deals
with energy providers here for renewable energy, which can help
fund new generation. I think it's going to be interesting
to watch the government's new investment drive, you know, engaging

(20:38):
with foreign investors to build infrastructure and so on, and see
whether some of that will flow into electricity generation.
The question for me, though, around all this energy debate
is, well, what is the capacity for? So if New
Zealand is not a hub for AI training, if models
are getting smaller and less compute intensive, then what is

(21:00):
actually the call on energy that we need to meet?

Speaker 2 (21:03):
And that's for me what the interesting question is.

Speaker 1 (21:05):
That is very interesting because, you know, when I talked
to the likes of Vanessa Sorenson at Microsoft, I said,
you know, what is actually in these data centers? Are
you loading them up with AI chips to do model training?
She said, no, it's actually the traditional stuff you'd expect: hosting,
data processing, data applications, all of the sorts of services that
are moving to the cloud.

(21:30):
It's actually not a big component of the technology in
those New Zealand data centers. That may change over time,
and you have the likes of Datagrid that have
a plan to establish AI centric data centers in Southland
because it's efficient to cool data centers down there.

(21:45):
But from what I've seen, they're all buying assurances of
megawatts of capacity, and that doesn't seem to be worrying
the electricity sector. It suggests that for the next decade anyway,
there's enough supply to meet their demands.

Speaker 3 (21:58):
It seems so. From what I can see, the
market seems to be looking after itself at the moment,
which is a good thing. And as long as New
Zealanders are able to continue to power their homes at
the same time as they can power new innovations and
new business models and so on, then I think we're
in a good spot.

Speaker 4 (22:17):
Yeah.

Speaker 1 (22:18):
One of the things you would have noticed in the
campaign over there is the sort of anti woke
movement that is part of the MAGA movement and all
of that.

(22:27):
And I guess there are some genuine concerns about generative AI, in particular what
content it's drawing on and what slant and bias it
puts on information. Was there anything you picked up there
from your discussions with experts about how we deal with
this issue of trying to give people factual information, but

(22:49):
dealing with that bias, that potential political or ideological leaning,
and information that is going to color a lot of
these large language models.

Speaker 2 (22:58):
Yeah.

Speaker 3 (22:59):
So one of the critiques from Republicans is that AI
safety standards are operating kind of as this woke policy
that advances diversity, or stifles free speech, and is
ideologically driven rather than being innovation and competitiveness driven. For me,
I think if we can try and be clearer about

(23:20):
what we mean about some of these terms, that would
be great. So for me, AI safety has become this buzzphrase.
It's really ill defined; it means anything people want it to mean. Instead,
if we can talk about, for example, physical safety risks
where AI is interacting with the physical world, whether it's
in critical infrastructure or smart city applications, or health or hospitality.

(23:41):
You know, you can get a hold of that and
think about, okay, what are the evidence based technical solutions to that.

Speaker 2 (23:46):
It's very straightforward, nothing woke about that.

Speaker 3 (23:49):
But when you try and get into describing potential job
loss as an AI safety issue, you're just really complicating matters.
And I think we can steer away from these politically
divisive approaches just with being a bit clearer about what
we need.

Speaker 1 (24:05):
So you've come back from the US. Here, we have
a few things going on around AI policy. We've got
the recent guidelines on the use of AI in the
public sector that were released. We've got the National AI
Strategy in development. But have you got any sort of
real takeaways when you came back on the plane where

(24:26):
you thought, wow, there's like two or three things here
that we could really apply in New Zealand to really
good effect when it comes to trustworthy AI, but also
enabling our own companies to really innovate in this space.

Speaker 2 (24:40):
Yeah.

Speaker 3 (24:41):
So, one thing I really appreciated about the US is
the long term thinking, and so you can see that
for many, many years they have invested in technology R
and D, and that has stood them in good stead
for building an AI ecosystem, and I'd like to see
that sort of bipartisan long term thinking really rooted in

(25:05):
here in New Zealand as well.

(25:07):
I think the government here has already adopted one key US policy tenet, and that's this innovation
friendly approach, which is seeking to harness the opportunities of AI.
I think AI really is a technology that needs to
be normalized. It's a tool that can help us make
better decisions and be more productive, make our resources go further.

Speaker 3 (25:28):
You know, New Zealand's been in the productivity doldrums
for decades, and we can't afford not to adopt AI.
So this innovation friendly stance I think is a good
way forward. I quite liked the regulatory advice that came
out under the first Trump administration, which was still I
guess in place under Biden, that set out good regulatory

(25:51):
practice principles like leveraging scientific and technical information, pursuing flexible
and tech neutral approaches, which really I think helps you
focus your limited public sector resources, policy maker resources, on
issues where there's evidence of a problem, or there's evidence
of significant ambiguity, such that firms don't know what's what

(26:11):
in the regulatory space, and also where you have an
intent and ability to follow through on implementing new laws
or regulations. I did and do like the risk management
framework that came out of the National Institute of Standards and Technology.
That's a really good example, I think of voluntary guidelines.

(26:32):
It's been very influential in other countries, and there was
actually growing momentum in the US for that to be
a safe harbor. So, for instance, if you had a
piece of legislation that was requiring certain actions around AI governance,
and as a firm, you'd implemented this risk management framework,
then you'd be deemed to comply. And that framework gives

(26:52):
just pretty practical tips to organizations on how to augment
or sharpen their existing practices around risk management and assurance.
I think it's a really nice piece of
work that they've done. Maybe finally just to say, I
think that the US commitment to engaging internationally is something
I found compelling as well. Obviously, you know, in everything

(27:16):
they do, they have many more resources than us, and
we can never hope to be involved as much as
they are, but I just appreciated that they try to
take a leadership role, They try to engage and they
try to seek consensus with like minded countries, or at
least they did.

Speaker 1 (27:31):
Yeah, well, hopefully that will continue, because the fear
is that it'll be more of a unilateral approach to
some of these things, and more inward looking. So hopefully that
great work they've done they will build on. Just finally,
for us in New Zealand, we have been when it
comes to technology, a bit of a technology taker, you know,
and for this wave of AI that is particularly true.
I think a lot of our companies and consumers
are using ChatGPT, Gemini, Copilot.

Speaker 4 (28:04):
You know?

Speaker 1 (28:04):
These models developed offshore don't necessarily reflect our culture and language,
and you notice that when you use them: it's a
very American centric view of the world. You know, what can
and what should we be doing to find our own
way and our own uses of AI?

(28:22):
And at a fundamental level as well, what should we
be doing to build our own intellectual property in this space.
We've got the new public research organization coming that has
AI in its remit, so I guess there's an opportunity
there to do something a little bit more fundamental that's
going to hopefully add value to our economy specifically and

(28:42):
give us an edge competitively internationally.

Speaker 2 (28:46):
Yeah.

Speaker 3 (28:47):
So the way I think about it is that we
really need to maintain a focus on three blocks of
activity, if you like. One is the AI development side,
one is the AI adoption and application, and then just
general AI savviness, if you like. I don't think that
just because we're a small country and that we're normally

(29:07):
a tech taker, that we shouldn't have some competencies and
capabilities around developing AI. And you see that we do
have some groups. There's a group up in Northland, for example,
that's been using AI.

(29:22):
to establish a database of te reo Māori, so you know there's really neat work like that
that's going on. We have probably some amazing data sets
around I don't know, seismic activity, oceanic activity and so on,
and it's a matter of maybe using some of that
data better and if there's a way that this new
public research organization can contribute to that, that would be great.

(29:47):
I think that's still very much in early stages at
the moment of thinking about where that organization will go
and what its purpose will be, but really hopeful that
that will be able to step up and fill a
gap in our policy architecture, if you like. So, yeah,
definitely we need some capabilities around developing AI because that
means that we can take our own data, our own languages,

(30:09):
our own sort of culture and reflect that in the
AI that we have. Then, in terms of AI application,
we've got smart businesses out there already developing applications, new
business models with AI. I think we've really got to
try and turbo charge that. And finally in that third bucket,
just really putting an emphasis on how can people in

(30:30):
New Zealand.

Speaker 2 (30:31):
CAI is just a normal part of the technology suite.

Speaker 3 (30:34):
It's there every day, they're taught in school how to
use it, they're taught to think critically about it, and
it's just part of the fabric of society and what
we do.

Speaker 1 (30:45):
And just to finish off, Sarah, where are we at on
the sort of the timeline of things that government is
working on. Can you give us just a quick overview
of the things that are in train and maybe the
timeline on things like the AI strategy.

Speaker 3 (30:59):
Yeah, I can certainly speak to the elements of work that my
ministry has been supporting the government on. So you heard,
I think it was October last year, that Minister
Collins announced that there would be an AI strategy and
that it would be consulted on this year. So we've
been supporting now Minister Retti on developing that product and

(31:21):
hoping that that will be out, you know, in the
next little while. I can't give an exact date on that,
but you would have seen that the AI strategy has
actually been featured in government's Going for Growth plans as
part of its Innovation Science Tech pillar. So definitely the
government recognizes that this technology is important and we need

(31:41):
to be thinking strategically about it. The second deliverable or
product that my ministry has been helping the government on
is around a responsible AI guidance for firms or for businesses,
and that is taking a leaf I guess out of
products like the Risk Management framework in the US, also
similar products coming out of Singapore and other countries where

(32:04):
we're trying to give voluntary guidance to firms on how
they can think about the processes internally for having responsible
AI and developing good products and so on. Again, not
totally sure on the timeline of that, but hopefully pretty soon.
And I think that those two pieces of work will
be a really great addition to the policy architecture here

(32:25):
in NZ. And our colleagues in the Department of Internal
Affairs' Government Chief Digital Officer team, they're doing great work
on the public sector side as well, and as you mentioned,
new frameworks and other things coming out there.

Speaker 4 (32:38):
Oh good, Well, there's a lot going on.

Speaker 1 (32:40):
And what's your reflection on the experience of doing that fellowship.
I know I spent three or four months in the
US and it really did give me a whole new perspective.
Would you recommend it for people like yourself in government
policymakers who want.

Speaker 4 (32:56):
To get that perspective?

Speaker 1 (32:57):
Is it still relevant the US perspective on how to
do policy given the radical change that's going on?

Speaker 2 (33:03):
Yeah, I did wonder.

Speaker 3 (33:04):
You know, my fellowship spanned precisely half before and half
after the election. I thought, gosh, is everything that I'm
learning just completely irrelevant? And I would say no, it
was a really great experience stepping out of your day
job and having this self guided period of research in
a country that is, you know, it's pretty different to us.

(33:25):
We watch American TV shows, we listen to American music,
but being there and living there is a totally different thing,
and it was a real privilege to be able to
do that, a privilege to speak to some of the
experts there in DC and other places about their experience
with AI policy, where they thought AI policy in the US was going,
and to establish a bit of a network with those

(33:47):
people too, which I hope you know I can maintain
for the future.

Speaker 4 (33:51):
Good well, we'll put a link to your report.

Speaker 1 (33:53):
A very good report summarizing all of your experiences.

Speaker 4 (33:56):
We'll link to that in the show notes. Thanks so
much for coming on.

Speaker 1 (34:00):
Good luck for the road ahead helping inform AI policy
in New Zealand.

Speaker 2 (34:04):
Thanks so much, Peter, great to be here.

Speaker 1 (34:13):
Thanks so much to Sarah Box for coming on. You'll
find a link to a report on her fellowship visit
to the US in the show notes. Go to the
podcast section at businesses dot co dot nz. Stream the
Business of Tech podcast on iHeartRadio or wherever you get
your podcasts. That's dropping next Tuesday, and I'll catch you then.