All Episodes

August 13, 2025 • 36 mins

As US tech companies double down on artificial intelligence, pouring billions into new data centres and offering eye-watering compensation packages to secure the best talent, a different path is emerging for New Zealand.

Catalyst Cloud co-founder Don Christie returned to The Business of Tech podcast this week to lay out his vision for sovereign AI, one where open source models and local infrastructure pave the way for the country’s digital future.

While Christie welcomed the recent government effort to devise a national artificial intelligence strategy, he was clear-eyed about its limitations.

“My take is that the government is making a start... I thought it was quite generic in its application,” he says, noting that while the strategy offers guidance for small businesses dipping their toes in AI, it stops short of investing in the infrastructure or innovation needed for real autonomy.

Christie is adamant that New Zealand can, and must, chart its own course by leveraging open source AI. Catalyst Cloud runs on the OpenStack cloud platform and has worked with the likes of Te Hiku Media to apply large language models in the cloud to New Zealand-specific applications.

“The technologies are there. You don’t have to build it from scratch. We’ve done this with Linux. We’ve done this with OpenStack in the cloud space. And as open source models begin to mature... the opportunities to build self-determination within New Zealand will explode,” he said.

Listen to episode 111 of The Business of Tech in full, powered by 2degrees Business, streaming on iHeartRadio or wherever you get your podcasts.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Kia ora, and welcome back to the Business of Tech, powered
by 2degrees. I'm Peter Griffin, and today we're looking
at what I think is one of the most consequential
questions facing New Zealand's digital future: how do we build
real autonomy over the tools, data and infrastructure that increasingly
run our economy and society? The recent national Artificial Intelligence

(00:25):
Strategy released by the government, well, it had a pretty
tepid response from industry and from academics. It was seen
as underwhelming, missing critical areas, and with little in the
way of firm goals or targets to guide the way forward.
There was no mention of sovereign AI, a dedicated effort
to build our own AI capability, so we aren't reliant

(00:48):
on overseas platforms and technologies. Other countries have prioritized this.
The Swiss, for instance, have just released a large language
model developed on their own supercomputer by their own scientists.
But they've also open sourced the model so that anyone
can use it for free. The government may not have

(01:09):
mentioned it or be thinking about it, but leveraging open
source AI is probably the best way for us to
avoid being reliant on models from the likes of OpenAI.
GPT-5 debuted last week to decidedly mixed reviews.
So Don Christie returns to the Business of Tech this
week to discuss open source AI. Don is co-founder

(01:32):
of Catalyst IT and Catalyst Cloud, which is built on
the OpenStack open source platform. He's got some interesting
ideas on what sovereign AI should actually mean here. He
sees it as an issue of self-determination, choosing architectures,
models and governance that keep New Zealand in control of
critical systems and sensitive data. We get into the nuts

(01:56):
and bolts of open source AI: why open models and
transparent training data sets matter, how locally owned cloud can
deliver practical AI today without the need to spend hundreds
of millions on high-powered chips from Nvidia. We also
tackle the thorny parts: political bias in models, data exposure

(02:16):
under foreign jurisdictions, and the risk of defaulting to bundled
AI just because it might be included in the license agreement.
All that and more on the Business of Tech. So
stay tuned for the interview with Don.

Speaker 2 (02:37):
Don Christie, how are you doing?

Speaker 3 (02:38):
I'm doing very well, thank you.

Speaker 4 (02:40):
Yeah, there's lots of exciting things happening, and thank you
very much for having me back.

Speaker 1 (02:44):
Well, it's great to be back at Catalyst IT, Catalyst Cloud.
Gloomy day in Wellington. It feels like, you know, winter
is dragging along, but a lot of really cool stuff happening.
We just saw this morning, for instance, the public release
of GPT-5, a little bit delayed. It's been sort
of hyped up. Have you had a play with it yet?

Speaker 3 (03:04):
Not at the moment.

Speaker 4 (03:06):
The tool that we use for corporate services at Catalyst,
that tends to be Claude AI. We find its tone much
more aligned with Catalyst's tone, and we find it a
safer tool to use as well. It's more thoughtful, if
you can humanize it that way, in terms of the
responses it gives.

Speaker 1 (03:25):
As Sam Altman has said, incremental gains. Everything's just getting faster,
more efficient, better subject matter expertise, better reasoning powers. And
I guess that's the journey we're on. We've got all
these big players who have billions of dollars of investment
now high stakes because they need to return on investment,
and that's really what we're going to talk about. In

(03:47):
the context of this conversation, there's been a lot of
discussion about so called sovereign AI and the role that
open source can play in it.

Speaker 2 (03:56):
And you've got a lot of expertise in that area.

Speaker 1 (03:58):
But just to maybe give this conversation a bit of context,
we had a few weeks back the government put out
a national Artificial Intelligence strategy for New Zealand and frankly,
it was not well received. The tone of commentary on
LinkedIn from experts in the field was very disappointed, and it's
seen as more of a vision statement really than a strategy,

(04:21):
and certainly not backed up with any significant new investments
or initiatives around AI. What was your take on it
when you read it?

Speaker 4 (04:28):
My take was probably more nuanced than that. My take
is that the government is making a start. When I
looked at that strategy, I thought it was quite generic
in its applications. So if you were a small business
or a manufacturing company, it would give you some idea
of what the technologies could be used for in your
day to day work. And my understanding, certainly in talking

(04:51):
to people in government, is that this is a start
and they really want to hear from New Zealand organizations
that are doing amazing stuff with AI, they really do,
and then from that they can develop a strategy, funding
and so on. So you know, we partner with companies
like Te Hiku Media who have been doing some amazing work in

(05:14):
AI in the Māori language

Speaker 3 (05:16):
Space for decades.

Speaker 4 (05:19):
David Bribner, who founded You Imagine, I don't know if
you've heard of them. Incredible work that they do, you know,
from safety on construction sites to quality control on EV
manufacturing plants in Europe, applying the AI technologies in so
many different ways. So it's important for New Zealanders not

(05:40):
to allow our narrative to be smothered by the narrative
that's coming largely out of the US, and not to
feel that we are helpless actors because we don't have
five hundred billion dollars to build a data center.

Speaker 1 (05:54):
Yeah, but, having said that, I totally agree with that. But the
tone of the document really sort of... it could almost
have been written by the multinationals.

Speaker 3 (06:02):
You know.

Speaker 1 (06:02):
It's basically like it's all about adoption. The priority is get
everyone using it, but we don't really care what they're using.
And the obvious choice at the moment is these multinationals;
it's assumed you're using OpenAI. There was no mention
in there of sovereign AI, you know, building our own
capability ourselves. There was reference to data sovereignty. And I'm interested in

(06:26):
your views on what sovereign AI is and actually should be.
Is it full stack development of our own AI from
hardware through to the models that run on it, or
can we do bits of that that allow us to
have enough autonomy to steer our own path on AI?

Speaker 4 (06:45):
So I mean sovereignty means different things to different people,
particularly in the New Zealand context. There's jurisdictional sovereignty and
then there's rangatiratanga, sovereignty and self-determination. So from
my perspective, in the open source narrative, we talk about
self determination a lot, and that can apply to individuals, organizations,
communities and countries. So the question then is how do

(07:10):
these technologies remove self determination or how do they build
that self determination. I was at the largest open source
conference in Australia New Zealand in January, the same day
that Trump's inauguration took place, and Elon Musk was doing
Nazi salutes. All those tech billionaires that were on

(07:33):
stage with Donald Trump. They built their empires out of
open source. My message to the government and to New Zealand,
and this speaks to the strategy, is: cut out the middleman.
The technologies are there, you don't have to build it
from scratch, though that, as you were saying, is one option. The
technologies are there. We've done this with Linux, we've
done this with OpenStack in the cloud space. And
so as the open source models start maturing, suddenly the

(07:57):
opportunities to build self-determination within New Zealand, however you
want to think about that, will explode. And if we're
thinking strategically about what the government could be doing,
we should think about making sure we don't close off
those opportunities, and that we encourage Kiwi businesses that are
investing in those opportunities, but with patronage, making sure that

(08:21):
the government and corporates and so on use it.

Speaker 1 (08:24):
And, too, with investment. We certainly haven't cut off those opportunities;
we're just not really doing anything to pursue them. I
think that is the frustration the tech community, the business
community, to some extent feels. But as you said, there
are some great open source large language models out there now.
Obviously Meta interestingly has been a leader, although their latest

(08:45):
version disappointed in terms of its benchmarking. But the one
there's a lot of excitement around is the Swiss large
language model developed on the Alps supercomputer. They have a
massive supercomputer in Switzerland. Ten thousand Nvidia GPUs, twenty million
GPU hours annually available to them. Eight hundred researchers

(09:05):
across Switzerland are involved in this government funded effort to
build an open source large language model that has been released.
I don't know if you've looked at it. Interested in
your perspective on how we could use that.

Speaker 4 (09:18):
So what I see is that initiative, whether it's a
Swiss one or another one, will be something that basically
the rest of the world coalesces around and further develops.
We saw it happen with OpenStack. That was a
cloud platform, open source, that came out of a collaboration
between CERN and NASA who were having to store and
process the largest data sets that the world had ever seen,

(09:41):
and they released their OpenStack product, and that's what
we're using at Catalyst Cloud. It was hyperscale before hyperscale was
invented as a term, and it became the largest IT
project in the world with contributions from anyone that wasn't
AWS or Microsoft basically, and we'll see that happen with
examples like the Swiss model, and they will iterate, and

(10:03):
they'll keep iterating, and we will be able to take
those models and where appropriate, bring them to our own servers,
our own GPUs, and build on them and build in
local context and build in local solutions, and be able
to provide that at a very individual level to organizations
that want to keep their self determination very close to hand,

(10:27):
at an incredibly affordable price. So that cuts out the
need and kind of undermines this whole narrative that you
have to have five hundred billion dollars to build a
data center in Texas, or that you need to reopen a
blown-up nuclear power station at Three Mile Island or whatever.
Those narratives are quite deliberately set up to smother the

(10:51):
idea that you can do AI independently.

Speaker 1 (10:54):
So it's obviously cost the Swiss tens, if not
hundreds, of millions of dollars to build this, right? So
it's a significant investment. But in terms of what is
required for us to take that open source large language
model and put it on Catalyst Cloud or some local
infrastructure and data centres, Spark or the newest supercomputer if
it's technically capable of doing it, what are we talking

(11:16):
about in terms of the technical requirements.

Speaker 3 (11:19):
I don't know.

Speaker 4 (11:19):
I mean, the AI research sandboxes that we've set up,
and we've got one that we use internally for R
and D. I think we've got about a dozen different
open source models that people tool around with, and
those ones are basically running on one GPU with
forty gig or whatever it is of VRAM. It's not the best performance,
but we've just modernized our GPUs on Catalyst Cloud. We're

(11:42):
getting great performance running these models. We've got clients using
them and ready to launch products on them, using large
language models with literally just a handful of GPUs. You've
got people like Te Hiku Media who are investing, for New
Zealand, significant amounts of money, but nothing like what we're

(12:03):
talking about overseas, and doing amazing things with it. So again,
it's finding out what people need to use AI
for and providing a contextual solution using these freely available technologies,
whether it's the Swiss model, or whether it's small language
models, quite specific ones, or whether it's some other AI technology

(12:27):
that's been around for a long time. There's a lot
of sort of machine learning capability that we've used for
our clients for decades.
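A rough back-of-the-envelope sketch (mine, not from the episode) of the single-GPU sizing Don is describing: the memory needed just to hold a model's weights is roughly parameter count times bytes per parameter, which is what determines whether an open-weights model fits in, say, 40GB of VRAM. The figures are simplified assumptions and ignore the KV cache and runtime overhead.

```python
def weights_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory (decimal GB) needed just to hold the weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 7B-parameter model at 16-bit precision needs about 14 GB,
# so it fits comfortably on a single 40 GB card.
fp16_7b = weights_gb(7, 16)

# Quantised to 4 bits it drops to about 3.5 GB, within reach of consumer GPUs.
q4_7b = weights_gb(7, 4)

# A 70B model at 16 bits needs about 140 GB: multiple GPUs, or quantisation.
fp16_70b = weights_gb(70, 16)
```

In practice you also budget for the KV cache, which grows with context length and batch size, so the usable ceiling is lower than these figures suggest.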

Speaker 1 (12:34):
Even if it's simply as good as the Claudes and the
OpenAIs, the fact is that we are fine-tuning it ourselves.
It's open source. There's this concept emerging of, you know,
federated AI. So at a national level it could be
the Australians, the Swiss and all of that. We contribute,

(12:57):
maybe money, AI expertise, research capabilities, as
well as our investment in hardware and software, to an
effort that countries at a national level can take advantage
of, so we can create something. Sure, we'll still have
the hyperscalers here and we'll be using those to some extent,
but the default option could be the national AI that

(13:19):
we've created ourselves.

Speaker 4 (13:20):
Yeah, this is where concepts like mātauranga Māori are really helpful,
because if you think about the governance of knowledge and
the concepts that some knowledge is there, and has been there,
for hundreds of years to be shared, some knowledge is
there to be shared in specific contexts, and some knowledge
is not to be shared. And if you have those

(13:41):
sort of governance contexts clearly understood, then you can absolutely
do what you've described. You can say, well, here's our information,
here's our knowledge, here's our data. We can contribute this
to the national infrastructure, but it's got to stay national
or no, no, we can actually contribute that to a
global knowledge base. But as we start getting closer

(14:02):
and closer to our own issues of self determination, then
we want.

Speaker 3 (14:06):
To be able to.

Speaker 4 (14:08):
Use the technology on this data, but only for ourselves.
And those are the sort of nuances and models that
you know, open source software and open source large language
models or any language model can enable. And in the
Swiss context, they're talking about the algorithms being open, the

(14:30):
data sets that they're using for training being.

Speaker 3 (14:33):
clear.

Speaker 2 (14:35):
And DeepSeek doesn't do that?

Speaker 3 (14:36):
No. What DeepSeek

Speaker 4 (14:37):
has trained itself on is open Llama, and then added stuff
on top.

Speaker 3 (14:41):
of that. And that's how open source works as well.
There is nothing wrong with.

Speaker 2 (14:46):
what others have done.

Speaker 3 (14:47):
It's fine.

Speaker 4 (14:48):
And you know, the scary thing about some of those
US models now is the US presidency wanting to have
control over what information comes out of those models. So
even if they're open source, they will have biases in
there that will be politically driven.

Speaker 1 (15:08):
The other issue, which you've spoken about repeatedly when it
comes to the changing political climate in the US. The
challenging of legal precedents on a whole host of issues
is the data sovereignty issue. And we had an executive
from Microsoft France, Anton Carniaux. He said recently in front

(15:28):
of the French Senate, when they asked him, can you
guarantee sovereignty of data for your customers in France? And
he said, look, I can't, no, because there is the
Cloud Act introduced in twenty eighteen. Technically the US government
could ask for this information. That hasn't happened, and Microsoft
takes it very seriously.

Speaker 4 (15:46):
Well that we know of, because under other situations, the
US government can ask for data on foreign nationals without
having to notify anyone about those requests.

Speaker 3 (15:59):
Right. That's the cute bit that they kind of don't highlight.

Speaker 4 (16:04):
Yeah, and the fact that Microsoft pulled some of the
capability for the International Criminal Court in Europe just shows
how long that reach is.

Speaker 2 (16:13):
Yeah.

Speaker 1 (16:14):
So that argument remains that you've made quite strongly in
terms of data sovereignty. If you really want data sovereignty,
it's got to be on sovereign infrastructure, local cloud providers essentially.

Speaker 3 (16:27):
Yeah, and locally owned.

Speaker 4 (16:28):
I mean, you know, it doesn't matter if you're Amazon, Microsoft, Oracle, Google.
You are still a US company. Just because Larry Ellison
hired Russell Coutts, he didn't suddenly become Team New Zealand, right?
And so it's about understanding those risks. I'm not going
to say don't use them, of course not. You know,

(16:48):
that ship has sailed and we're not going to
cut our noses off. But we do need to support choice,
and we need to support resilience, and we need to
be aware that our supply chains are no longer as
secure and allied to our interests as we might have
thought they were five or ten years ago.

Speaker 2 (17:11):
So we've got a strategy. There is some good stuff
going on.

Speaker 1 (17:14):
We've got a new research institute that's going to have
AI capability as part of it, so hopefully that will
inject some money into the research community. Where should our
priorities lie, do you think? I mean, should it be?
Should we be looking very seriously at this point at
open source large language models, running them on our own
infrastructure here, fine-tuning them? Is that something that is
going to shift the needle for us?

Speaker 4 (17:36):
What we have is that New Zealand is seen as a trusted
leader in many areas. You know, that's why our produce
sells so well. You know, we're certified that we feed
our cows on grass, all that sort of stuff, and
that trust should enable us to be world leaders in
the digital space and particularly in AI. And so this

(17:57):
is a huge opportunity, regardless of the ethics of
sovereign AI. From a values perspective, New Zealand is a
place now where indigenous populations are beginning to ask us
to look after their data, because in their own jurisdictions
they're under threat, and that's again because of Māori leadership

(18:19):
and Te Tiriti and things like that, and so we
should be able to build on that and I just
don't want us to miss out on that opportunity. One
of the biggest opportunities, the most important ones from a
New Zealand economic perspective, would be the use of AI
in agriculture and giving our farmers the tools to create,

(18:41):
produce and look after their land in ways that leave
them in control of their livelihoods and their businesses, which
you just won't get if all your data
is going to John Deere in Montana or Wisconsin, wherever
they're headquartered. Because as soon as you start allowing that
to happen, and it does happen, because John Deere collects
a lot of data from its machinery and it does

(19:03):
a brilliant job processing it. But then you're in this situation, well,
who owns the farm, you know? Is it John Deere
or is it the farmer? And I think from a
national economic perspective, thinking about where those solutions come from
for major exporters is a big opportunity for us, and again
something we can lead the world on. And again there

(19:24):
are open source projects out there. At Multicore World, one
of the US universities, interestingly enough, was talking about an
open source Agricultural AI project that gives farmers the power
in Sub Saharan Africa or wherever to analyze their crops,
their land or whatever and decide on its health or

(19:45):
what inputs are required and when and so on. So
those are the sort of opportunities that we should be
looking to bring to New Zealand and create this sort
of independent capability rather than just saying no, you can't
have it because it's too risky, yeah, or just
leave it to John Deere or OpenAI.

Speaker 1 (20:03):
That's right, it's easier to just leave it to them.
And that is sort of our default position. I think
in government procurement, for instance, it's like, well, we've got
you know, it embedded; we're all using Microsoft 365.
What have you got on the AI front? We'll just
bundle that into our licensing. That's sort of been the
way we approach this.

Speaker 4 (20:20):
Well, and then that allows Microsoft to put their fees
up twenty percent in a year, yes, or whatever it
was exactly, you know. So again it makes no sense
from an economic perspective. And it's not to say that
their products are not good, it's just to say this
is stupid. You want to modularize your approach to technology.
You want to make sure that you layer interoperability

(20:41):
using open standards, you know, so you can plug in
the right solution and you're not just relying on this
homogeneous stack, which always leaves you with one answer.

Speaker 1 (20:51):
So, if the minister had come out with the strategy
and said we've got fifty or a hundred million dollars earmarked
for AI for the national good, what would you have
spent it on?

Speaker 4 (21:04):
Well, I've talked about agricultural opportunity. I would look at
things through the lenses of some of our Māori businesses
and what they're doing, because again that gives New Zealand
a very unique perspective on technology, and it's a perspective
that actually the world wants. So if you look at
what Europe is doing, it's quite similar in a way

(21:25):
in terms of their concerns. The world is desperate for
solutions that allow agency and self determination to be sustained.
And if you can't do that, then your society starts
losing trust in its institutions, your democracies start failing, and
all these sort of bad things start happening. And I

(21:45):
think New Zealand's in a prime place. So those are
two areas I would look at, and then I would
look at the companies, and of course, you know, I'd
put Catalyst down as one that can enable these things
to happen. Catalyst; Dragonfly is another one; Nicholson's is another one.
You know, there are lots out there. Assurity, they've
brought an AI engine to their own cloud tenancies to

(22:08):
do a lot more test automation and QA automation. People
aren't sitting back just with their arms folded in New
Zealand doing nothing. They're developing stuff. But we need the
opportunity to sell it and if government isn't willing to
experiment and to engage with us, then these opportunities won't
be realized. The Parliamentary Counsel Office did a great suite

(22:31):
of proof of concepts at the start of this year
and we were involved in a couple of them, one
with Dragonfly, who I mentioned, around legislation: using AI
to summarize legislation, to allow politicians to query legislation to
determine what it might mean.

Speaker 3 (22:47):
And certainly these are just case studies.

Speaker 4 (22:48):
The more you have government agencies ready to do that
kind of experimentation, or clusters of them, the more opportunity
there'll be to discover what we can do here in
New Zealand.

Speaker 2 (22:58):
And it's been a bit slow.

Speaker 1 (22:59):
We had, famously, GovGPT out of Callaghan Innovation,
which unfortunately has been shut down now.

Speaker 4 (23:06):
The thing about that, as a supplier: it just used
Copilot or one of the GPTs, I'm not sure
which one. I couldn't quite understand where they were taking
that initiative.

Speaker 1 (23:16):
I think it was supposed to be just a customer
service chatbot if you wanted to navigate government departments. It
drew on information from government departments, so it was all
very safe. You weren't going to get hallucinations, you weren't
going to get inaccurate information. It would serve up exactly
the sort of thing that you would expect now just to be
built into customer service delivery for government services. But

(23:38):
you know, where's the innovation out of? You know, we've
had the merger of AgResearch and Landcare Research,
so there's potential there in this new Biosciences Research division
that we've got to do some really cool AI stuff
there. In the health sector, very sensitive data, very
particular issues we're grappling with in New Zealand around public health.

(24:02):
Where is the real drive and effort to coordinate across
all these great health tech companies, yep, get some momentum
going? The high-level stuff, yep, tick tick: OECD principles,
great guardrails for government departments. We've sort of done all
of that. But the actual, what's going to drive the
really innovative stuff, that was what was missing.

Speaker 4 (24:22):
It is applying context, local context and local needs, and
not just defaulting to the homogeneous default that's coming
out of the US. And the risk of
over-reliance on any overseas platform is always there, but
it's just got heightened. You know, how many people are
not traveling through the US because some of their health

(24:44):
data might have landed in the wrong hands. There are
states now that will throw you in prison if you've
had a miscarriage. That doesn't feel like a safe place
to hold our health data. And yet it's there. This
isn't me trying to cry wolf. It's trying to say, well,
you know, if we addressed that issue and we came
up with a solution, we could sell that.

Speaker 1 (25:05):
And this is very topical what you say there, because
the Trump administration has just announced some massive project to
harmonize data across the health sector, including in conjunction with
the big tech giants. So all that Apple data. Apple
has been very proactive about gathering data from Apple watches

(25:26):
in the iPhone itself. It's integrated GPS into that, very
progressive stuff. All of that data being shared with Medicaid
and all of that. That is the end goal now
for the Trump administration. Sounds great in terms of getting
more visibility into your healthcare, but what are the implications
of that.

Speaker 4 (25:44):
Well, until Elon Musk's DOGE group comes in and
forces its way into that data, regardless of what legislation
and international treaties or anything like that say.

Speaker 3 (25:55):
They just forced their way in.

Speaker 1 (25:56):
So the impetus there, I think is as strong to
do some sort of sovereign AI for these really sensitive things.
What can we do in the next couple of years,
do you think, given the trajectory that the government is
putting us on, what are some wins that we could chase?

Speaker 4 (26:11):
I think, do more of these things like the Parliamentary
Counsel Office has been doing. And it is thinking about
the governance and the policy, very much the principles. That's quite
straightforward and simple. But think about it. Get them, get
them out there, get them done. As I said, the
Māori lens is an enabler. Apart from being the right
thing to do, because of Te Tiriti, it's actually an enabler, and

(26:32):
it really does. I think sometimes people don't recognize that it
makes the job of all New Zealand organizations easier, the
fact that this part of our society has put so
much thinking into it, because it applies broadly, right? The
thinking and then the work that's been done in
that space, and it also helps give New Zealand a

(26:52):
competitive advantage. So those are two things, and the other
is just to invest in and support initiatives: find the companies that
are doing interesting things, create a space for them to
come into. It's quite hard for New Zealand companies to
get an individual voice or even a collective voice at

(27:14):
the table, and it's sometimes hard for the government to
reach them because there's so many of us and relative
to, you know, US-type companies, we're small and we
don't have their lobbying power and budgets.

Speaker 2 (27:26):
Has the government listened to you?

Speaker 1 (27:28):
Have they sought your advice on the National AI strategy,
for instance.

Speaker 3 (27:32):
Not before it was published.

Speaker 4 (27:34):
On the other hand, I've been fortunate enough to have
a meeting with the Minister, Shane Reti, recently and some
of his team, which is fantastic. And

Speaker 3 (27:41):
he was very, very engaged in the topic.

Speaker 4 (27:45):
The other thing I would encourage New Zealand businesses like
us to do is to leverage the fact of you know,
the degrees of separation in New Zealand being very small,
and keep using the contacts that you might have with politicians,
with government people within your communities to build that New

(28:06):
Zealand narrative.

Speaker 1 (28:06):
Okay, well, and what's ahead for Catalyst in the AI space?
You're working with the likes of Te Hiku Media, and there's some
innovative stuff there. Are you seeing more demand for applications?
For instance, the whole agentic AI thing, is that
on your radar?

Speaker 4 (28:23):
Various aspects of it are all on our radar, of course,
because you know, in order to stay relevant to our clients,
to our staff, to our country, we have to be
on top of everything. I think for us, we're just
having a lot of fun with the fact that we
have a cloud company and so we can kind of
go to town on a bunch of technologies, and you know,

(28:44):
we're pushing capability into some of our product sets, which
are largely in the education space, but also trying to
find ways of working with our clients to help them
realize their own product sets and outputs and so on.

Speaker 3 (29:01):
And it's a lot of fun.

Speaker 4 (29:02):
One of the really good bits of advice I got
from another CEO at another digital company was to build
an AI practice group in the company right before you
even start rushing into it. And we did that not
just with engineers, but with project managers, with designers, you know,
so across the company, to play with tools, to play

(29:22):
with them in a safe environment to you know, kind
of develop principles, and that has driven understanding and acceptance
and it's been incredibly useful and the people that have
been involved have been great. So that's an approach companies
can take. I set up something called the AI Discovery
Lab, which people can register for, and it just does
a little bit of what you

Speaker 3 (29:43):
were saying you do.

Speaker 4 (29:44):
It allows you to sort of use a couple of
language models on Catalyst Cloud, put in a prompt, get
two answers back, and then reflect on those answers and
reflect on the fact that if those answers aren't perfect,
surely that means that a human with some capability and
nous needs to be involved in checking them, in creating

(30:06):
a narrative that absolutely suits what they're trying to do,
and so on. So it's kind of about reflection and
agency and so on.
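To make the idea concrete, the Discovery Lab's side-by-side pattern, one prompt, two models, human comparison, can be sketched in a few lines of Python. This is a hypothetical illustration, not Catalyst Cloud's actual implementation; the stand-in "models" below are plain functions where real code would call two hosted language-model endpoints.

```python
from typing import Callable

def compare(prompt: str, models: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Run the same prompt through each model and collect the answers."""
    return {name: model(prompt) for name, model in models.items()}

# Hypothetical stand-ins: real code would call two hosted LLM endpoints here.
def model_a(prompt: str) -> str:
    return f"[model A answer to: {prompt}]"

def model_b(prompt: str) -> str:
    return f"[model B answer to: {prompt}]"

answers = compare("What is sovereign AI?", {"A": model_a, "B": model_b})
for name, answer in answers.items():
    print(f"--- {name} ---\n{answer}")
```

With both answers on screen, the human's job is exactly the reflection described here: judging where each answer falls short and what a person still needs to add.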

Speaker 1 (30:13):
What exactly is involved when you take a large language
model and as they say, fine tune it for your needs?

Speaker 2 (30:19):
Is that quite a...

Speaker 1 (30:20):
There's processing, computer processing, involved in that, but
there's also human oversight of that as well, right? Is
that a complicated process to undertake?

Speaker 4 (30:30):
What I know is that you can basically bring your
data sets and use tools like RAG, retrieval-augmented generation, and there
are other tools as well, and sort of add that training
and context to the large language model, and that gives
you a much more accurate response to a specific query. So
often those models fail when you get very contextual and

(30:51):
very specific. You know, according to ChatGPT a year ago,
I was a great musician. It got a few other things right,
but it was wrong about that, I assure you. So when
you get down into those sort of contextual things.
But there are other ways of doing that as well,
with small language models and so on, potentially more effective
in the long run in terms of how costly it

(31:12):
is to keep retraining them and refreshing them.
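For listeners reading along, "tools like RAG" (retrieval-augmented generation) boils down to: retrieve the documents most relevant to a query, then prepend them to the prompt so the model answers from your own data. Here is a minimal sketch; it is an illustration only, with a toy bag-of-words retriever standing in for a real embedding model, and the call that would send the final prompt to a language model is omitted.

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Toy bag-of-words vector; a real system would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    ranked = sorted(documents, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model answers from your data."""
    context = "\n".join(retrieve(query, documents))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Catalyst Cloud is a New Zealand cloud provider built on OpenStack.",
    "Halter uses AI-driven collars to manage cow herds.",
    "The SKA project streams data out of radio telescopes.",
]
prompt = build_prompt("Who provides OpenStack cloud in New Zealand?", docs)
```

Because the context travels with each query, you can refresh the document set without retraining the model, which is the cost advantage alluded to here.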

Speaker 1 (31:15):
Okay, well, it sounds like you're talking to the minister,
which is great, as one of the big open source
platforms in the country. That's excellent to see. In terms
of the pace of development, it seems as though there's
some discussion about whether we've hit some hard limits
on AI. And you look at the money that's still

(31:35):
going into this, you know, billions. Microsoft in its
quarterly results recently upped its spend again for the next
quarter, thirty billion in one quarter.

Speaker 2 (31:44):
One company.

Speaker 1 (31:46):
Is this a bubble or is this a genuine effort
to build the infrastructure of the future.

Speaker 4 (31:52):
It's, I think it's a bit of both. It's possibly
more bubble. What those companies are facing is a shortage
of what they call tokens, in other words, words. They've
run out. That's why they're stealing people's data. That's why
they're trying to push into knowledge sets and data sets
that they have no rights to push into. That's why
they're in breach of copyright, all these things, right, because

(32:14):
they've run out of data to train their algorithms on.
Your data is far more valuable to those AI companies
than it ever has been in the past. That's why
they're pushing into your phones. You don't have the choice
of whether or not to switch the AI off in
WhatsApp, because they're basically trying to collect data to retrain
their models. I think that's pretty unethical and evil, and

(32:39):
that's why they're hitting limits. Yeah, it's nutty as well,
because you know, you sort of think, well, how much
more do we need of those kind of models when
there's all these other opportunities to be exploring, you know,
using those sort of technologies. Yeah, so yeah, I don't know.

Speaker 1 (32:56):
And the other constraint obviously they're coming up against is talent.

Speaker 2 (32:59):
You know, when they're willing to spend two hundred and fifty.

Speaker 1 (33:02):
million dollars on one, albeit very talented, twenty-four-year-old
AI developer, they're clearly desperate for smart people who
are going to lead the next big shifts.

Speaker 3 (33:13):
I don't understand that.

Speaker 4 (33:14):
I think the other thing just to keep in mind
is that GPUs, and you look at Nvidia's market
cap, it's stupendous at the moment. But GPU technology
is old technology. There's equally old technology.

Speaker 3 (33:28):
You know.

Speaker 4 (33:28):
There are equivalents to GPUs that run in your phones,
which we were using on the SKA project to stream
data out of telescopes and to basically do some very
quick data analysis on. And so low-powered parallel processors,
I think, are going to be the future of AI.
Everyone at the moment is buying GPUs, which are expensive

(33:50):
in many ways, they're terrible for our planet in terms
of their energy usage. And everyone is doubling down on
this GPU space, and the people that get the breakthroughs
on low-powered parallel processors are going to win.
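As a toy illustration of the map-style data parallelism being described here, splitting a stream into chunks and analysing them concurrently, the pattern can be sketched in Python. It is illustrative only: threads stand in for low-powered parallel hardware, and the "analysis" is just summary statistics, not anything from the SKA project itself.

```python
from concurrent.futures import ThreadPoolExecutor

def analyse(chunk: list[float]) -> tuple[float, float, float]:
    """Placeholder 'quick analysis': min, max and mean of one chunk."""
    return (min(chunk), max(chunk), sum(chunk) / len(chunk))

def parallel_analyse(stream: list[float], chunk_size: int = 4, workers: int = 2):
    """Split the stream into chunks and analyse them concurrently, in order."""
    chunks = [stream[i:i + chunk_size] for i in range(0, len(stream), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyse, chunks))

# Eight readings analysed as two chunks of four.
results = parallel_analyse(list(range(8)))
```

The same fan-out-then-collect shape applies whether the workers are threads, processes, or many small low-power cores.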

Speaker 1 (34:04):
And we do have some smart people on parallel processing
here in New Zealand as well. Yeah, maybe that's the
other thing: if we are going to do anything hardware related,
it's what is the alternative to GPUs?

Speaker 3 (34:15):
Yep, completely Yeah.

Speaker 1 (34:17):
Well, thanks Don, as always a very enlightening conversation. Good
luck with things at Catalyst, and hopefully you'll have some
input into these plans.

Speaker 2 (34:28):
It is a starting point.

Speaker 1 (34:30):
I think there's a lot of scope for improvements and
expanding on it.

Speaker 4 (34:33):
Yeah, and Peter, I hope this makes the recording. Thank
you, you're doing God's work with all this, you know, the
work that you're doing in the technology space for New Zealand.

Speaker 1 (34:43):
So thank you. Thank you, and thanks to everyone out
there for listening. These are important issues, close to
home anyway.

Speaker 2 (34:49):
Thanks Don.

Speaker 1 (34:55):
Thanks to Don Christie for a candid, practical tour of what
sovereign AI can look like when it's grounded in open source,
clear governance and locally controlled infrastructure. A few takeaways from
me: Māori-led data governance principles are an enabler for
trustworthy systems. A lot of work has been done in

(35:16):
that space, and we're world-leading in it. We can apply
those principles to trustworthy AI in conjunction with open source efforts
like what the Swiss are doing. Sovereignty is a design choice,
not a dollar amount. Federated open models give us leverage,
as does procurement that prizes interoperability over commercial bundles. Don rightly

(35:38):
points to AI in agriculture being an area of opportunity
for us; just look at Halter's use of AI in
managing cow herds. And there's a need to keep experimenting, in
government and in the private sector, with open source AI
at the heart of it. That's it for this week's
Business of Tech. If this sparked ideas, share the episode

(35:59):
with a colleague in government, a founder building with open models,
or a CIO rethinking their AI stack. And if you're
working on a New Zealand sovereign AI pilot, especially in
health or agriculture, I'd love to hear about it for
a future show. I'm Peter Griffin. Thanks so much for listening.
I'll catch you next week with another episode of the
Business of Tech. Mā te wā.