Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:08):
Welcome to the Business of Tech, powered by 2degrees Business.
I'm Peter Griffin and in this episode I'm talking to
Greg Davidson, the man responsible for leading the country's biggest
IT company, which employs five thousand workers across Australasia and,
through its payroll platforms, is responsible for paying around half
(00:29):
a million Kiwis every fortnight. Datacom is celebrating sixty years
in business.
It's a true Kiwi success story, powering some of the biggest digital transformations
in our region in the private sector as well as
in government. It has a roster of big clients across
the Tasman as well. Datacom was founded back in nineteen
sixty five in Christchurch.
It's weathered every major technological wave since the rise of computers, from mainframes
to the Internet, mobile devices, the cloud of course, and
now artificial intelligence. As Greg Davidson, who is Datacom's group
CEO and has led the company for eighteen years, shares
in this interview, Datacom faces another technological turning point with
(01:18):
AI, navigating the risks and extraordinary opportunities of the technology
at a time when every organization, large and small, faces
unprecedented change. It's a wide-ranging conversation about tech, about
leadership, and what comes next for both Datacom and New Zealand.
Here's the interview with Datacom's Greg Davidson.
Speaker 2 (01:46):
Greg Davidson, Welcome to the Business of Tech. How are
you doing?
Speaker 3 (01:50):
Very good, Thanks, Peter. I'm really pleased to be here.
Speaker 1 (01:53):
Yeah, and it's a big year for Datacom. Datacom turns
sixty this year. Founded in nineteen sixty five, our biggest
New Zealand owned IT company spans Australia and New Zealand.
One of the biggest employers in Australasia. Incredible company, incredible
success. You've been there, this is pretty amazing, nearly half
of that time, twenty eight years. You joined Datacom in
nineteen ninety seven, became the chief executive officer in two
thousand and seven, which is crazy to think about, the
year that the iPhone
(02:28):
was released. So this company has seen numerous waves of
technological change. You've been there basically for the rise of
the Internet, the rise of smartphones and apps, the cloud revolution,
and now that AI revolution. We're going to talk about
all of that, but maybe just take us back to
your personal journey into Datacom. What, you grew up in
(02:51):
the Hutt Valley and then went into, what, computer science
at Victoria?
Speaker 3 (02:55):
Yeah, that's right, if you want the slightly longer version
of it. I had two parents who were teachers, so
I grew up in Hawke's Bay and Palmerston North, and
then there was a journey down to Wellington at one
point as we moved between the two, and then it
was high school in the Hutt Valley. Yeah, Datacom
turning sixty, it is older than me, and I'm
going to hang on to that for as long as
(03:17):
I can. It began in the bureau era and has been through many, many,
many changes, and I think as we sort of expand
on the conversations we've got today about the different areas
of computing, you'll find that Datacom's had to adapt
through it. My personal journey, I'm a lapsed software engineer,
so I studied software engineering at Victoria and Canterbury and
then went through a kind of small business doing lots
(03:40):
of small jobs for small business owners into working at
the point where computers really impacted design, and then multimedia
and then the Internet, and then my journey took me
to Datacom a couple of years after the Internet
really exploded across the world.
Speaker 1 (03:56):
Yeah, and that was a huge area of growth for
Datacom. You mentioned that, you know, it came
out of the bureau era, so that was nineteen sixty five.
Computer Bureau Limited was the precursor company to
Datacom. The founders were Paul Hargreaves and doctor Bernard
Battersby, who I think was an economics lecturer at Canterbury University
at the time. Paul was an accountant. They decided,
(04:19):
let's buy one of these early sort of really powerful computers.
It was called the ICL nineteen oh two.
They raised thirty thousand pounds at the time from shareholders.
What did they start out doing with this sort of
bureau approach to computing?
Speaker 3 (04:34):
The answer to your question is a super logical one. What's
the thing that requires lots of calculations that you would
buy a computer and share it across multiple companies to
do in that era? Payroll. Yeah, you know, we're still
a payroll provider now. We have two very modern
versions of that in Smartly and Datapay,
and we pay about half a million Kiwis
(04:55):
every fortnight using those platforms, and that would be
over thirty thousand small to medium businesses. So if you
want that original starting point back in the sixties, it's
iterated several times. We've invested heavily in a modern generation
of it and it still exists as a small piece
of what we do. Not everybody knows because of course
Smartly operates under its own brand and deals with a
(05:16):
lot of small business, which is a bit different to
what the Datacom brand is usually associated with in
New Zealand.
Speaker 1 (05:21):
Payroll has been the constant throughout and on both sides
of the Tasman.
Now you do payroll.
In the seventies it was the introduction of Cobol for
payroll systems. You were integral to that, bringing Oracle to
New Zealand, getting big into databases as well. So there's
been quite a few milestones there. What was it like
your conversations with the likes of Paul Hargreaves, John Holsworth,
who was a shareholder and then became the chairman of Datacom
(05:50):
in the late eighties. You know, last year I
went to Hewlett Packard in Palo Alto and they've preserved
the office there where Bill Hewlett and Dave Packard used
to sit and they positioned their desks so they could
had a line of site with each other. So if
one of them was on the phone trying to do
a deal, he could gestured to the other guy to
(06:10):
come over and join the conversation if he needed him
to get the deal over the line. And I think
in New Zealand, the likes of Paul Hargreaves, Bernard
Battersby and John Holdsworth sort of have that same sort of legacy.
They were some of the pioneers of our tech sector.
But I'm interested in your interactions with them and what you've
learned from them over the years.
Speaker 3 (06:30):
I met Paul obviously and had plenty of opportunity to
interact with them. I joined during the era when
John was chair in the late nineties and Frank Stevenson
was CEO. You know, if you look at other founders
that exist around the world, or people who really kind
of put their DNA into a company, John's was: work
back from the customer. It was essential to a services organization,
(06:53):
the reputation for what you can do in the market,
and the ability to find the intersection of what your
customers need and what we're good at, or what we
must become in order to be able to fulfill it,
I still think is alive in what we are today.
And he very much pointed out there was another piece
that was to do with you know, some of what
you do for customers is very specific to them. But
(07:13):
on the other hand, there's some things where New Zealand's
relatively small and finding economy of scale can be quite
a challenging thing. And so a lot of what we
do in the New Zealand market and the Australian market
is where customers can't find their own economy of scale
in doing it, and therefore we can do it on
a shared basis for them that makes sense. And then
there's another piece of it, of course, which is the
coding, software development, building systems, either broadly to meet
(07:38):
need in the market or very specifically for customers' businesses.
Speaker 1 (07:41):
And you've definitely achieved those economies of scale. Just recently
you put out your financial results for the full year
and one point four eight billion in revenue, and you
know you've been over the billion dollar MARC for several
years now, so this is a huge scale you've got there,
thirty seven million dollar profit. I guess it was a
(08:01):
pretty tough year last year, so good to be profitable
and to be reducing your debt and growing revenue as well.
Speaker 3 (08:10):
Because we operate in New Zealand and Australia, and because
we operate across quite a lot of different kinds of activities,
the balance between the two does give us some ability
to cope a wee bit better with market fluctuations and
certainly the fact that New Zealand's economy is really hurting
at the moment and it's taking a long time for
business confidence to come back. And so every organization, quite
(08:30):
sensibly during that period, is focused on value for money,
and wants what's done to have a path to benefit very quickly.
And of course at the opposite end of the spectrum,
that's very much challenged by the fact that the AI revolution,
I'll call it that deliberately and perhaps even a
bit provocatively, offers just such a huge change in what's
possible with technology, but it does require that you very
(08:54):
actively pursue those benefits. It's not just going to happen
because you buy a product. It's going to happen because
you really focus on where you can find the benefits
and make good choices as to where within your organization
it can make a difference. And you know, we've
got our own version of that. We've got multiple examples
of how we're adapting what we do internally and going
(09:14):
after automation benefits and artificial intelligence-fuelled benefits, and we've
got a whole lot of activity we've taken to the
market with our customers too, where we're helping them adopt.
Speaker 1 (09:22):
You've recently sort of restructured the business, taken a bit
of a trim on headcount, which most large tech companies have.
So we'll talk about that in a minute, but just before
we do: the key to that stability and the ability
to stay a large company where there are a lot
of competitors. You've got the multinational consultancy firms that are very
active in New Zealand. You've got the likes of Spark
(09:44):
and then a lot of IT integrators in the market
as well. But you seem to have been able to
maintain consistent revenues and loyal customers over the
decades. How do you do that but also
be innovative, keep moving and adapt to emerging technologies?
Speaker 3 (10:02):
If you look at market share data, particularly of the
systems integration and outsourcing piece of what we are, so
you know, the services part, which is what Datacom and
the Datacom brand's best known for in both New Zealand
and Australia, you're going to find that in the landscape
of top ten organizations by size, and that's both revenue
and headcount in Australia and New
(10:23):
Zealand combined, we're in the top ten. We're pushing our
way slowly up it, and we've been doing that over
a long period of time, to the point that we're in the top five.
Not that that scale and market share is necessarily an
answer in and of itself, but every competitor is either a
global organization or a telco in that list. So
we're the only pure-play services organization owned in this part of
(10:43):
the world that organizations can look to for
impartial advice about the technology waves that they are hitting,
for what you should use for what, for assistance in
achieving their goals. And I think that position, we consider
it to be, I describe it as, a responsibility, in
that there's so much selling that goes on in the
(11:05):
tech space that doesn't necessarily have what the customers need
at its core, and that in order for us to maintain
that over time, we have to be able to really
hear what the customer's priorities are, then get
really good at matching the solutions we build for them
or offer them to that need. It's difficult, and particularly
(11:27):
in times of inflection as you pointed to some of
the big ones at the beginning when you opened up.
You know, you talked about the Internet era, the mobile era,
the cloud era, and now the AI era, and they
are four big periods of change in the industry. Now,
how do we do it? Look, the difficult bit, and
I'm sure if you talk to any of the team
at Datacom, they'd say they see this in me:
(11:49):
You can't ever stand still, and you've got to be
willing to relentlessly pursue what you need to be, not
what you are now. And that's really uncomfortable. Being willing
to change the place quickly when technology revolutions land, being
willing to really try and work out, out of all
of the selling that's being fired at you by global platforms, being
(12:11):
able to choose what's best for a customer in any
given circumstance. How to do it economically is actually a
really complicated skill, because you've got to get past the
marketing and the hype. You've actually got to figure out
how can I reliably and repetitively do this across our
customer base.
Speaker 1 (12:25):
Well, you put your finger on what I think is
one of the real reasons for Datacom's success: your independence.
I mean, there are tech companies in New Zealand that
are Microsoft integrators. They live and breathe Microsoft. You know,
they're a partner for you, but you partner with Google, AWS,
you know everyone. So when a government department, you know,
(12:45):
looking for a high-trust tech partner, goes to Datacom,
you're not going to be pushing one particular vendor. You're
going to be looking at what the customer needs and
then finding the best technology to suit it. So I
think it's that tech-agnostic sort of approach that's underpinned
your growth.
Speaker 3 (13:01):
I think it's a really good summary of it, and
actually one of the really interesting things. And you know,
if you've been following the AI revolution closely, you'll see
that the frontier companies in the AI space aren't by
and large your traditional players, and so you've actually got
a really interesting situation where new organizations are rising, and all
(13:22):
of the big established players in the market, with the
brand names that you know in technology, that just about
every household knows, are all suddenly in
an immediate, furious set of investment in R and D,
in activity, in order to figure out how to catch
up with this revolution that's just sort of turned up
over the last couple of years. You've got frontier companies
like OpenAI and Anthropic, and I'd probably put Google's
(13:46):
Gemini in that mix as well. You've got the rise
of Nvidia and AMD as part of that conversation too.
This is something that every commercial organization does need to
understand because when you're in one of these disruptive periods
where your traditional players may or may not have the answer,
you've then got to go searching for the best solution
(14:08):
for the problem. And you've got to be really careful
to mix engineering understanding and commercial understanding and cost effectiveness
understanding to ensure that you get the outcomes you need.
And so we think in times like this, firstly, we
need to learn like crazy and really inform ourselves. Secondly,
we need to sort of be customer zero of the change.
And third we then need to figure out how to
(14:29):
distill some learnings out of that, and there's a ton we
could talk about in that space, out to our customer
base to help them make wise decisions on
the other side of it.
Speaker 1 (14:36):
You and your team, your senior leadership team, have obviously
been looking at the growth of AI, and the era
of generative AI, large language models in particular, and thinking,
if we're to best grab this opportunity, we're going to
have to change how we do things. So talk
us through this sort of restructure you've done recently and
how that's positioning you for the era of AI.
Speaker 3 (14:58):
The senior leadership team is comprised of the heads of
our two primary markets, which are New Zealand and Australia,
and they're the folk who are in the face of
our customers, ensuring that we choose the right things that
we do to put in front of them, ensuring that
the work we do in that country is meaningful for
the country, which is a core thing that we want
(15:19):
to achieve. And then our capabilities, in essence the
things we're good at, we made sure operated on an
across-everything-Datacom-does basis rather than in one
country or the other country, because we felt that the
customer demand was: put the best of what you can
do in front of us every time, regardless of which
city they happened to be in, and so we needed
(15:40):
to organize differently in order to achieve that. And then
where it is helping us is we obviously need to
iterate and change what we do in the market to
reflect all of the capabilities that general who of AI
have added into the kitbag. We're able to do that
far more productively and effectively having made the shift in
how we operate. There's no doubt that by the time
(16:00):
you add it to the kind of machine learning capability
and image recognition and speech recognition and all the other
pieces of the AI equation, it offers the opportunity for anything
that involves document handling or words to be done very differently.
And while perhaps some of the uses in the early
era of it, which are, you know, I'm going to
chat with a GenAI system, are obvious, the next couple of stages of
(16:24):
it are happening really quickly, and hot on the heels of
this is the building of expert systems. And one of our
examples, going back to that payroll
use case, it's not all we do, but it's
really worth coming back to this one: the
team there have built a payroll advisor that can pass the
Payroll Practitioners exam. To build that, what you need to
(16:45):
do is you need to do a couple of things
on top of what you'd get out of one of
the large language models. The first thing you need to
do is you need to build guardrails or safeties in there,
because large language models are essentially just too helpful. If
they don't know the answer to it, they'll have a go.
And in an area like payroll, you can't have that
kind of guessing or estimation. What you've got to do
(17:06):
is give a definitive answer like an expert would, which
is I'm sorry, I don't know the answer to that,
or, I need a little bit more information from you in
order to get to an answer rather than guess. And
so actually putting those safeties around it is an engineering task.
And then the second thing is really curating the data
that the answers are given from. In the payroll example,
that would be: we want the core information from the
(17:26):
agencies involved and not the case law, because the case
law is non-determinative, and out of that you then
get a simple answer which is, you know, going to be
absolutely accurate. We're rolling that out, in fact this month
it's gone into general availability, and our customer base will
now be able to ask it any questions it wants to.
And then you've got the power of tools like generative AI,
where if you want it, for example, to answer awards questions,
(17:48):
you feed it the awards and it can give you
the answer the next day, which in any kind of
older version of artificial intelligence would have been a really
difficult thing to do. You can imagine in insurance, in
law, in medicine, in all of these spaces, this expert
era is going to completely change what it is. You know,
if you're going out there, you can go out there
better informed, with a whole lot of decision support, and
(18:09):
a lot of the things that would have taken just
lots of document reading and trying to find answers can be achieved
really quickly.
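To make that concrete, here's a minimal sketch (Python 3.10 or newer) of the two engineering moves Greg describes: curate the sources the answers can come from, and refuse rather than guess when confidence is low. It isn't Datacom's payroll advisor; the toy word-overlap retrieval, the threshold and the two sample sources are illustrative assumptions.

```python
# A toy illustration of the "guardrails plus curated sources" pattern.
# Not Datacom's payroll advisor: the word-overlap retrieval, the confidence
# threshold and the two sample sources are all illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Source:
    name: str  # a curated agency document, deliberately excluding case law
    text: str

CURATED_SOURCES = [
    Source("Agency guidance: PAYE basics",
           "PAYE must be deducted from salary and wages before payment."),
    Source("Agency guidance: annual holidays",
           "Annual holidays accrue at a minimum of four weeks per year."),
]

def retrieve(question: str) -> tuple[Source | None, float]:
    """Toy retrieval: score each curated source by word overlap."""
    q_words = set(question.lower().split())
    best, best_score = None, 0.0
    for source in CURATED_SOURCES:
        overlap = len(q_words & set(source.text.lower().split()))
        score = overlap / max(len(q_words), 1)
        if score > best_score:
            best, best_score = source, score
    return best, best_score

def answer(question: str, min_confidence: float = 0.2) -> str:
    source, confidence = retrieve(question)
    # Guardrail: a definitive refusal, like an expert, rather than a guess.
    if source is None or confidence < min_confidence:
        return ("I'm sorry, I don't know the answer to that. "
                "I need a little more information to answer reliably.")
    # A production system would prompt an LLM here, constrained to answer
    # only from the retrieved, curated source.
    return f"Based on '{source.name}': {source.text}"

print(answer("How is PAYE deducted from salary and wages?"))
print(answer("What did the court decide in case X?"))  # refuses: not curated
```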
Speaker 1 (18:15):
The productivity boost that we've been promised from AI, once
these systems are used generally across the public sector and
in businesses as well, we will start to see that. But
one area I know that you're working on as well:
I've been talking to some of your engineers just about
how AI is transforming the software development process. You're developing
(18:36):
applications and platforms for companies all the time, and a
particular problem we have, we've been a bit slow in
some of our organizations to modernize their old systems.
But to be able to use AI agents actually for
the software development process, from the actual coding, to project management,
to business analysis to testing. That is massively improving and
(19:02):
making more efficient the software development process, isn't it?
Speaker 3 (19:05):
Yeah, it is. And you know, it's one of the
big ironies that the change in roles, in changing what
you do in your day-to-day job, is possibly
greatest in the tech sector as a consequence of it.
Large language models are really good at engineering problems. A
lot of other organizations are going about improving the productivity
of their programming in a subtly different way, which is
(19:26):
giving some of the new generation of tools that enable
generation of code to their developers. What this is doing
is it's moving further back up the software development life cycle.
It is rethinking how you produce the original analysis of
the system to be built, and then go heavily agent
oriented for all the rest of the software development life cycle.
Old systems, if you have the code available for them,
(19:48):
that code forms the blueprint for what the new system
needs to do when you want to replace it. Obviously
there'll be a point where you want to enhance it
and do a whole lot of modern things, But most
of these older systems are locked up in a generation
of code where either there's not a lot of skill
around it, or you wouldn't want to build and invest
a whole lot in anyway because it's not so useful.
The approach that we're talking about came out of an
opportunity that we have in government in Australia, and the
(20:11):
team built business analysis agents that go across all of
the old code and produce the blueprint or the documentation
for what the new system needs to do based on
the old code. So the original draft documentation that had
been produced by the customer represented five years worth of
business analysis time. The agents were able to do that
literally in hours by going through this old kind of
(20:33):
nineties-era, thirteen-hundred-form client service system that you
wouldn't put a lot of investment into in current times.
Once that blueprint or documentation was available, the team then
let loose team-lead and development agents. One of the
moments where what generative AI, and agents in particular,
(20:54):
can do kind of knocked me back in my seat
for the first time in ages in the tech world
was watching a kanban board of these agents just tearing
through what would have been months and months and months
and months of dev effort. And then watching the
agents talking to each other and celebrating when they did
a release and congratulating each other for the progress they've
been making and interacting like a highly automated and highly
(21:16):
efficient team would. And then seeing the test documentation generated
and test agents be able to parallel-test old system
and new system to be able to determine that the new
system was delivering the outcomes it needed to. Wow. Our
estimate is that in that particular project, we halved our
original estimate for the dev that needed to be done,
and over seventy percent of the code of the new
system was built by agents. Now loads of organizations all
(21:40):
the way around the world have these older systems where
replacing them is a very, very, very risky task because, you know,
humans going through all the old code and getting the
modern system exactly right isn't as accurate, unfortunately, as
highly automated approaches to it like this.
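As a rough illustration of that parallel-testing idea, here's a minimal sketch that runs the same cases through a legacy implementation and its replacement and flags any disagreement. The two entitlement functions are invented stand-ins, not the actual system or Datacom's agents.

```python
# A toy version of the parallel-testing step: run identical cases through
# the legacy implementation and its replacement, and flag any divergence.
# Both entitlement functions are invented stand-ins, not the real system.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    client_id: int
    income: float

def legacy_entitlement(case: Case) -> float:
    """Stand-in for the old green-screen era system."""
    return round(max(0.0, 300.0 - 0.25 * case.income), 2)

def new_entitlement(case: Case) -> float:
    """Stand-in for the agent-built replacement under test."""
    return round(max(0.0, 300.0 - 0.25 * case.income), 2)

def parallel_test(cases: list[Case]) -> list[Case]:
    """Return every case where the two systems disagree."""
    return [c for c in cases if legacy_entitlement(c) != new_entitlement(c)]

cases = [Case(i, income) for i, income in enumerate([0.0, 450.0, 1200.0, 99.99])]
mismatches = parallel_test(cases)
print(f"{len(cases) - len(mismatches)}/{len(cases)} cases agree")
for c in mismatches:
    print("investigate:", c, legacy_entitlement(c), "vs", new_entitlement(c))
```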
Speaker 1 (21:55):
If you've got a green-screen system, like one running
a bank, it's something they just don't want to touch because
it's rock solid.
Speaker 3 (22:02):
There are just examples absolutely everywhere you look, in the
commercial sector and government, right the way across the spectrum.
In order to get that productivity, it wasn't about taking
one role in that equation, which is like a developer,
and making it more productive. It was about rethinking the
entire process to be able to automate the end to
(22:23):
end process, and so for every other process that exists
in business where you think that a large language model
or an AI agent or tool can make a difference,
what you need to do is do the proof work
to prove how much of the work can be done
via the automation, and then completely rethink the process. Not
(22:43):
just rethink the individual role, but actually think about how
that entire process will work differently in your company. And
it could be legal drafting, it could be giving advice
to a patient, or perhaps if you wanted to be
really disruptive, think fully about all the systems that operate
in a ward at a hospital. And I'm sure there will
be parts of the world that are thinking about how
to revolutionize that, because it's a worldwide challenge. There are
(23:04):
dozens of examples in any kind of space that deals
heavily with contracts, that deals heavily with legislation, where you've
got large amounts of language that are very time consuming
for people to go through and think about a different outcome,
and obviously these tools can produce high quality output. The challenge
with this, and I'll use the self-driving car example,
(23:26):
as, you know, we've been promised self-driving cars
for ages now, right, you know, the revolution where cars
would be driving themselves every year without people having to
keep their hands on the wheel. I'm sure it's more
than a decade old that the conversation about that started.
The reason it's taken such a long time for it
to arrive is not because of the day to day
but because of what happens in harder-to-predict circumstances
(23:49):
and testing those systems. In the old world of programming,
you would test for all of the conditions and things
that happen in a very follow-the-process kind of way.
In this world, the amount of data that you feed an
expert system, an AI system, be it for self-driving
cars or be it for large language models, means that
you've got to be careful. The more mission critical, the more
(24:10):
it deals with lives. And you could treat employment decisions
as an example of dealing with lives and quality of life:
employment decisions about who you might hire or who you might not,
decisions about how medical care is dispensed, decisions about outcomes
in court. But how you test the system to ensure
that in the unusual cases or the edge cases, which
(24:31):
is what we call them in the programming world, the right
outcome is arrived at, is incredibly complicated. So even though it looks
like you can get ninety nine percent of the time
a really expert answer to a problem by using a
system like this, how you test that to assure that
you get that in the most important situations is very difficult.
That's the challenge with rolling these systems out into mission
(24:52):
critical use cases.
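One hedged way to picture that testing challenge: evaluate the routine cases and the edge cases as separate slices, and gate release on the critical slice rather than the average. Everything here, the stub model, the sample cases and the ninety-nine percent bar, is an illustrative assumption.

```python
# A toy sketch of edge-case-weighted evaluation: report the routine and the
# critical slices separately, and gate release on the critical slice.
# The stub model, sample cases and the 99 percent bar are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalCase:
    prompt: str
    expected: str
    critical: bool  # True for the rare, high-stakes edge cases

def model_under_test(prompt: str) -> str:
    """Stand-in for querying the real system."""
    return "decline" if "fraud" in prompt else "approve"

def pass_rate(cases: list[EvalCase]) -> float:
    if not cases:
        return 1.0
    return sum(model_under_test(c.prompt) == c.expected for c in cases) / len(cases)

cases = [
    EvalCase("routine claim, small amount", "approve", critical=False),
    EvalCase("routine claim, well documented", "approve", critical=False),
    EvalCase("claim showing fraud indicators", "decline", critical=True),
]
overall = pass_rate(cases)
critical = pass_rate([c for c in cases if c.critical])
# Ship only when the edge cases, not just the average, clear the bar.
print(f"overall {overall:.0%}, critical {critical:.0%}:",
      "release ok" if critical >= 0.99 else "blocked")
```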
Speaker 1 (24:53):
Well, that's why in healthcare, insurance, the public sector, putting
this into the tax system or social welfare, there's hesitancy,
and we've been relatively slow to accelerate that, because it's those
one percent of cases, if they go wrong, that are really going
to erode trust in AI systems. So you can understand
(25:13):
why we're seeing AI agents pop up for various use
cases but not others.
Speaker 3 (25:18):
Yes. If perhaps some of the effort that was going
into coding is now requiring less effort, how you test the
output and the system, on the other hand, is becoming
even more important in this era. And then when you
get to a situation, and we can already see these
emerging, where multiple agents are talking to each other and
producing group outcomes, you've then got an even more complicated
(25:39):
set of circumstances to test for because they might not
even come from the same organization. So while it has
reduced some of the effort in coding, testing for the
outcomes and ensuring that you do the design piece right
has suddenly become more important. Yeah.
Speaker 1 (25:53):
Well, I wanted to ask you about that because the
obvious question is, if you're able to automate seventy percent
of the code development for a new platform or app,
what does that mean for your software development workforce? But
I think you've answered that there. It's not necessarily reducing that workforce,
but they're going to be doing different types of roles.
Speaker 3 (26:10):
Yeah, and I think you're going to see this, my
role has changed on me, absolutely right the way across
the spectrum of adoption of generative AI. If we
go one step further, what's going to happen to employment?
That's the hard question. Will it completely change the landscape
of what happens in the technology world? Yes. Is there
more technology work to do and more work to do
to implement these kind of systems for companies? Yes. Will
(26:31):
it mean a reduction in the tech sector? At the moment,
I think that's highly unlikely. I think the need for
this kind of productivity is going to be critical to
be globally competitive. What it means for broader employment is
a more complicated one. And you know, economists have been
writing about this actually since pre the GenAI era, since the
AI era. They don't agree, they don't agree in the slightest.
(26:52):
There are some that predict mass inequity and a
hollowing out of some job types, and certainly you can
see some of that trend at the moment. We've been
through big revolutions like this before, the Industrial Revolution and
other ones, and new kinds of jobs emerged and new
kinds of opportunity emerged out the other side of it,
(27:12):
and then opportunity rebalanced in and around it. So at
the moment, I'd prefer to stay on the side of
tech optimist in that as long as everybody stays focused
on the beneficial applications of it, then I think we'll
make progress. But the short term impact could be quite dramatic,
and it's certainly going to challenge what people learn
(27:35):
in school and learn at university. I know the universities
are hugely challenged by this, just from evaluating student performance:
how can you tell what's AI-written and what's student-written?
It's getting increasingly difficult. What people must be learning is to
use these alongside their university learning in order to be
productive in the workplace using this kind of augmentation. And
it's changing really rapidly, and so quite what it means
(27:58):
for the future is very, very difficult to predict.
I'd prefer an economist or a sociologist or somebody like
that was interviewed rather than me trying to guess at that.
On the other hand, I do think that we can't
avoid the fact that all of the global organizations that
are competing in our market, are competing for work, are
adopting it, and therefore we must evolve in order to
(28:19):
be competitive on that global stage.
Speaker 1 (28:22):
We do have a productivity problem. It's very competitive. We
do need to adopt AI. We've just seen the government's
AI strategy for the country. We were the last in
the OECD to publish one. It's had a fair bit
of criticism. But as the leader of one of the
biggest IT companies in New Zealand, do you think we've got
the settings right? Is there enough urgency and momentum in
(28:44):
what the government has outlined to sort of get us there,
to get us those productivity gains and get AI competently
and ethically used across New Zealand.
Speaker 3 (28:56):
I think asking government to do this by itself as
opposed to in cooperation with industry is you know, it's
a difficult ask. Government has its own set of priorities
where it will be focusing on where each of the
agencies that's in government can effectively adopt the technology.
(29:17):
What I do think it will be
worth us touching on is actually the infrastructure needed to
support AI, because that's the bit that gets overlooked. Everybody's
focusing on the ethical conversations, and I think, around the
world, there are a lot of government settings that have
been put out on that, ranging from,
you know, the US and UK examples through to the EU examples,
(29:37):
through to the work that Australia is doing and some
of the ethical pieces that need to be tackled there.
But let's talk about the infrastructure that underpins AI and
the huge consequences of that. Where New Zealand has some
distinct advantages and it has some really important decisions that
it's going to have to make going forward. So we've
(29:59):
talked about the large language models. What we haven't
talked about is the infrastructure needed in order to produce
those outcomes. And so the GenAI answers rely
on two things happening. They rely on the large language
models being trained. So that is a very very very
(30:21):
compute-intensive process that uses GPUs rather than the old
CPU world of computing. That's why you've seen
Nvidia become one of the most valuable companies in
the world. And you've seen AMD put a really strong
kind of parallel response alongside that about chipset capability that
it's got in the US. That means that the data
(30:42):
center market has been lit on fire, as has that
of providing GPUs to support both of the two
pieces: there's the training of the large language model, and
inference, which is a jargon term I'm going to use
because I won't be able to help myself, which is when
you send a query to a large language model and
get an answer back. Now, to train, I think it was
(31:03):
GPT three point five, it took two point eight million hours of
GPU time per training run. You've got to think cards
sitting in a server and two point eight million hours
worth of one of these expensive Nvidia cards with all
the heat and power associated with that. So in the
US they're down to less than four percent of data center
(31:24):
space available. They're firing up old nuclear reactors in order
to be able to fuel the demand for training activity.
The inference side of it is still a big demand,
and it's rapidly accelerating, and only a few parts of
the world are able to take those advanced chipsets because
of the legal restrictions that have been put in place
(31:46):
between the US and China. Now, the interesting part for
where we are is Japan, South Korea, Australia and New
Zealand are the only countries that can actually receive these
chipsets in this part of the world. So all of the
inference that is done for all of our part of
(32:07):
the world that isn't going to go to the US
can only land in one of those countries, but there
needs to be sufficient power to fuel that, so there's
going to be potentially a huge crunch on power. You
can already see it starting to turn up in some
parts of Australia. Long term forward planning is needed to
ensure the power's there, and of course we've got this huge
(32:28):
advantage in New Zealand of renewable power if we invest
enough in the infrastructure and the transmission in order to
be able to do it. But we don't have a
lot of that processing happening here; most of what
you do when you interact with one of these products
will go to another part of the world to be processed.
So do we need the infrastructure for it here? I'd
say from a resilience perspective, we do. Whether the forward
(32:48):
investment in power and data center infrastructure and GPU deployment
can happen to do it? That's very uncertain, and it's
not being discussed enough, as everybody's focusing further up the
stack on how to make their individual organization more productive.
And then of course, you know, there's a lot
(33:09):
of other places we could go from there. I won't
go down too many rabbit holes. But I don't
hear that being talked about anywhere. There's just
sort of this assumption that it will be there,
and it won't be, because it's, as you know, hundreds
of millions or billions worth of investment needed in order
to support that kind of processing capability.
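For a feel for what two point eight million GPU-hours means, here's a back-of-the-envelope sketch. The fleet size, per-card power draw and hourly price are illustrative assumptions, not figures from the interview.

```python
# Back-of-the-envelope numbers for a 2.8 million GPU-hour training run.
# Fleet size, per-card power draw and hourly price are assumptions.
GPU_HOURS = 2_800_000        # per training run, as quoted for GPT-3.5
FLEET = 10_000               # assumed GPUs running in parallel
WATTS_PER_GPU = 700          # assumed draw of a high-end datacentre card
PRICE_PER_GPU_HOUR = 2.50    # assumed cloud rate in USD

wall_clock_days = GPU_HOURS / FLEET / 24
energy_mwh = GPU_HOURS * WATTS_PER_GPU / 1_000_000
cost_usd = GPU_HOURS * PRICE_PER_GPU_HOUR

print(f"wall clock: ~{wall_clock_days:.0f} days on {FLEET:,} GPUs")
print(f"energy: ~{energy_mwh:,.0f} MWh for the cards alone, before cooling")
print(f"compute cost: ~${cost_usd:,.0f}")
```

On those assumed numbers a single run ties up ten thousand cards for roughly twelve days, which is why the power and data centre questions Greg raises sit underneath everything else.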
Speaker 1 (33:27):
Yeah, and I mean, Datacom is investing in
these GPUs. You know, you've got extensive data center infrastructure,
so you're upgrading it for the AI-centric applications.
I mean, is it completely unrealistic for us to ever
as a nation develop our own LLM or at least
(33:48):
adapt and open source LLM. The Swiss have just created one,
spent a lot of money developing it, they've open sourced
it under an Apache license. So we could we take that,
train it or adapt it for our uses at reasonable
cost or is it just prohibitively expensive.
Speaker 3 (34:08):
It's probably worth talking about what's happening in other parts
of the world. It's a really really interesting question because
there are multiple stages in what you do when you
get an LLM to produce answers. Obviously, you know, there's
a need for an LLM that can speak te reo in
New Zealand, and I know there's lots of pockets of
(34:29):
activity happening there. I'm positive that all the pockets of
activity need to cooperate for that outcome because it will
be one that will require tremendous investment and effort in
order to deliver the outcomes. It is much more likely
that we will do a lot of fine tuning of
a large language model, which is a stage that happens
after training, because investing in the training also requires massive
(34:52):
amounts of data in order to do it, in order
to make the LLM able to deal with more difficult questions.
So the fine tuning piece is in essence, selecting where
it is expert and what information it relies on, and
weighting the answers. So some parts of the world that
perhaps have different views on history are doing it at
(35:15):
the fine-tuning stage, in order to say that in
our country you want history to be represented this way
when the large language model gives its answer. And you
then get into a whole bunch of really, really, really
interesting questions that you could debate, you know, if you got
some of the greatest experts on this in the world
around the table. If you think about the interaction between
(35:35):
language and worldview: is the fact that the majority of
these are trained in English first going to result in
a limited worldview? Can you deal with that at the
fine-tuning level? Which version of history, what answers is it
going to give to critical questions of history or belief?
(35:57):
Is that intrinsic to the training that it's received, or
can that be dealt with at fine tuning? But there's
no doubt in my mind that the place that young
people go to when they're at high school or university
is going to become large language models very rapidly instead
of the more traditional search world. And so actually it's
really important to understand what biases the language models have
(36:19):
when they answer those questions. When people delve into looking
for help in their life, will they ask a large
language model that question? Is it safe to give answers
to that? What will it say about the history of Aotearoa?
What will it say about the history of other countries
in the world. And we need to take a view
on that. And so I don't see that debate happening
enough either. And I think, you know, we're drifting towards
(36:43):
what does it mean? I don't think we'll have to
train them from scratch in order to do it, and
some of the open source ones are now immensely capable,
as is the ability to fine tune the larger ones.
How far do you want to go into this?
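And for a sense of what that fine-tuning path can look like in practice, here's a minimal sketch using LoRA adapters on an open model with the Hugging Face transformers, datasets and peft libraries. The model name, corpus file and hyperparameters are placeholders, not a description of any actual New Zealand effort.

```python
# A minimal sketch of the "fine-tune an open model" path rather than
# training from scratch. Model name, corpus file and hyperparameters are
# illustrative assumptions; this is the LoRA adapter approach.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "some-open-source-model"  # placeholder for a permissively licensed LLM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all of the weights, which is
# what makes local fine-tuning on, say, an NZ-specific corpus affordable.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         task_type="CAUSAL_LM"))

data = load_dataset("text", data_files={"train": "nz_corpus.txt"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=512), batched=True)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
model.save_pretrained("nz-tuned-adapter")  # saves only the small adapter
```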
Speaker 1 (36:53):
So clearly, you know, what you're saying is the focus
on infrastructure, we've got to get that right, not just
the data center infrastructure, and it's great to see a
lot of investment going into that space in New Zealand,
but the power infrastructure, and the ethics and the fine tuning
of this, how we go about that, is really important. But
ultimately we want to have some sovereignty, I guess, over
(37:15):
AI, or be able to steer our destiny rather than
just rely on OpenAI and Anthropic to provide these great,
large language models that don't necessarily represent our worldview or history.
I guess we've got to get really deliberate about that
as a nation, don't we? And that's maybe where the
government can play a role in bringing private sector, academia
and the government together to do that.
Speaker 3 (37:35):
Yeah, it can, and look, different governments around the world
have approached it in loads of different ways. We are
a small country with only five million odd people in
it and a very finite amount of money that we
can spend, so we have to be smart. We probably
need to choose very carefully who we're tightly allied with
(37:56):
and where we can cooperate and not reinvent stuff that
we don't need to reinvent, so that we can really
focus on the core of what making progress looks like.
And different governments have cooperated public private very very differently
in different ways in order to move that forward. You know,
if you look at very large, wealthy countries, what they
are able to invest in may be different to what we
can afford to do here in New Zealand. So the
(38:17):
decisions we make dealing with scarce capacity to invest in
R and D have to be really smart and really
focused on the best outcomes for New Zealand and the
best outcomes for the people here.
Speaker 1 (38:27):
Just finally, Greg, eighteen years as CEO of Datacom,
expansion across the Tasman, employing over five thousand people. How
do you stay motivated and inspired to lead a big company,
a huge team, with such change going on? How do
you keep fresh in that role nearly two decades into it?
Speaker 3 (38:47):
I'm the child of two teachers, right, so I was
brought up on learning. I was really lucky, I was
in an environment where I was encouraged to be curious
about things. That curiosity is still there, that interest. I
started building Lego when I was very little, and then
found computers and found that what you could build on
those was whatever I could imagine
(39:07):
to some extent or other. And so there's that interest in
technology and how it applies. And then, if I'm
going to add a second piece to it, it's
sort of a sense of responsibility. I touched much
earlier in our conversation on what a unique position we have
in the New Zealand and Australian market and the fact
that we need voices that worry about how technology is
(39:29):
implemented in our part of the world. And so I
think we need organizations that are independent and don't just
result in this displacement of everything that is spent in
New Zealand to global forces, and instead we need organizations
that are based in this part of the world to
have a voice that ensures that New Zealand's and Australia's
(39:50):
adoption of technology goes in a way that's good for
the country we live in and the part of the world we work in.
Speaker 1 (39:55):
And it's been a very successful formula for sixty years.
Congratulations on that massive milestone and here's to many more
decades fulfilling that role in New Zealand.
Thanks so much for coming on the Business of Tech.
Speaker 3 (40:09):
Thanks so much, Peter, really nice to see you, and
thanks for the conversation.
Speaker 1 (40:15):
That's all for this edition of the Business of Tech.
My thanks to Greg Davidson for joining me on the
show and sharing his perspective on Datacom's remarkable sixty year
journey and what the future holds. The thing that really
strikes me about Datacom is what Greg said about John Holdsworth,
the early shareholder in Datacom who went on to be
(40:36):
executive chairman of the company for around twenty years
from nineteen ninety and whose family are still a major
shareholder in the company.
It was really that relentless
focus on what the customer needed, rather than embracing tech
fads of the day, that kept Datacom competitive as other
players emerged in IT services in New Zealand.
Greg made pretty clear
that Datacom's success has always come from that willingness to
listen to customers, but also from remaining independent in a field
of global giants. Datacom isn't the flashiest company around. It
doesn't like talking about itself particularly, but it's kept loyal
customers throughout the decades and kept the trust in its
(41:25):
products and services, which has really paid off in terms
of its growth and its reach in the region. If
AI is going to boost our national productivity, which we
really need, we actually need Datacom to really get it right,
given that it's such a big player in the IT
services market here.
So it was really interesting to hear Greg
talk about the approach to AI and how Datacom is
changing itself to embrace this new technology. If you liked
the episode of The Business of Tech, leave a review
and share the podcast. It's on iHeartRadio and in your
favorite podcast app. Check out the show notes and
my weekly tech reading list over at Businessdesk dot co
(42:05):
dot nz.
You'll find them in the podcast section.
Next week, I catch up with a Kiwi expat
who is one of the leaders of Britain's efforts to
build quantum computers. He tells me what the tech has
in store for us and just where we are in
the quantum race. That's next Thursday.
I'll catch you then.