
November 19, 2025 · 42 mins

Reporting from the front line of the artificial intelligence revolution, Time magazine reporter Harry Booth has a unique perspective on the technology moving markets and transforming business.

The London-based University of Auckland graduate has been part of Time’s team of AI reporters for the last 16 months, his byline regularly appearing in the pages of the iconic news magazine. 

This week’s episode of The Business of Tech podcast features an in-depth conversation with Booth, who gave me a tour of how AI is reshaping the world of work, explained the technology’s breakneck pace of development, looming questions over its energy use, and the critical signals to watch as 2026 approaches.

Is AI really cleaning up?

Despite dire predictions that white-collar jobs would be decimated, Booth finds reality to be more complex and, in some ways, more sobering. In areas like translation, seasoned professionals aren’t being replaced outright. Instead, their roles have shifted. 

Translators Booth interviewed are now tasked with correcting AI-generated text – a role rebranded as “AI cleanup” – typically at around half the per-word rate of translating from scratch. Yet fixing flawed machine translation can take as long as translating the text from scratch, so the shift erodes earnings and job satisfaction for skilled workers without necessarily delivering true productivity gains.

The same story, Booth notes, is playing out in other “canary in the coal mine” sectors. A frequently cited study by METR found that experienced software engineers using AI coding assistants believed the tools had sped them up by about 20%; empirical measurement showed they were actually about 20% slower. This suggests productivity impacts are far from settled, with AI often under-delivering unless carefully tailored to fit the workflow.

From assistants to agents

Much has been made in the past year of the rise of “AI agents” – systems that operate independently and can execute multi-step tasks, not just answer queries. 

“We’re seeing the emergence of agentic AI — these aren’t just chatbots, but systems that can carry out tasks, fetch data, and increasingly do things in the world on our behalf,” Booth told me.

He believes we’re still in the early innings. Some AI systems can now complete longer software engineering tasks on their own, and the length of time an AI system can work independently – its “time horizon” – has roughly doubled every four to seven months. If that trend holds, Booth suggests we could see agents capable of a full workday by 2027.
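To get a feel for why that 2027 estimate follows from the trend, it helps to run the numbers. The sketch below is a rough back-of-the-envelope calculation in Python; the one-hour starting horizon and eight-hour workday are illustrative assumptions for this article, not figures from METR’s benchmark.

```python
import math

# Back-of-the-envelope extrapolation of the "time horizon" trend described above.
# Starting horizon and workday length are illustrative assumptions, not METR's published figures.
current_horizon_hours = 1.0   # assume today's best agents manage roughly an hour of autonomous work
workday_hours = 8.0           # the "full workday" milestone Booth mentions

doublings_needed = math.log2(workday_hours / current_horizon_hours)  # three doublings

for doubling_period_months in (4, 7):  # the four-to-seven-month range cited in the episode
    months_out = doublings_needed * doubling_period_months
    print(f"Doubling every {doubling_period_months} months: about {months_out:.0f} months to a workday-length horizon")

# With these assumptions the milestone lands roughly 12 to 21 months from late 2025,
# which is how you arrive at "somewhere around 2027" if the trend holds.
```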

However, today’s agents remain far from being true digital employees. Meaningful productivity gains only appear when companies design AI tools that address specific, high-value pain points using both language models and smart software engineering.
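The law firm example Booth recounts in the interview, where 2,400 licensing agreements were whittled down to the few hundred that genuinely needed human review, shows what that scaffolding can look like. The Python sketch below is purely illustrative: the firm’s actual tool, prompts and model are not described in the episode, and `call_llm` here is a hypothetical placeholder for whichever model API a team might wire in.

```python
from dataclasses import dataclass

@dataclass
class Agreement:
    name: str
    text: str

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for a language-model call; the episode doesn't name the firm's provider."""
    raise NotImplementedError("wire in your model provider here")

def needs_human_review(agreement: Agreement) -> bool:
    """Ask the model whether an agreement likely needs amending for EU law, failing safe to human review."""
    prompt = (
        "Does the following licensing agreement likely need to be adapted or amended "
        "to comply with EU law? Answer strictly YES or NO.\n\n" + agreement.text
    )
    answer = call_llm(prompt).strip().upper()
    return answer != "NO"  # anything other than a confident NO is routed to a paralegal

def triage(agreements: list[Agreement]) -> list[Agreement]:
    """Whittle a large document set down to the subset that genuinely needs expert eyes."""
    return [a for a in agreements if needs_human_review(a)]
```

The point is the pattern rather than the prompt: the model sits inside ordinary software that narrows the work and routes edge cases to people, instead of being handed to staff as a raw chatbot.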

Energy, infrastructure, and the next bottleneck

On the infrastructure side, AI’s growing thirst for energy is emerging as a defining challenge. Far from being a personal moral issue (a single AI prompt’s carbon footprint is tiny, Booth points out), energy is a strategic concern for the giants racing to train ever-larger models. 

“AI isn’t a climate disaster at the individual level, but as companies multiply their data centres, the real bottleneck for development is shifting – from talent and chips to energy itself,” he said.

With electricity production in countries like the US growing slowly and massive data centre builds underway, companies are securing long-term energy deals – sometimes using the rhetoric of AI’s needs as justification for keeping older, dirtier power sources online.

But Booth also highlights a surprising upside: the same AI giants are pouring fresh capital into clean-energy tech, particularly nuclear fusion. Projects previously imagined as decades away are suddenly within striking distance. Fusion investment has exploded from US$2 billion to $15 billion in just three years, with players like OpenAI, Google, and SoftBank on board. New Zealand’s own OpenStar is part of this story, pursuing commercial fusion with techniques borrowed from the scrappy world of startups.

While a fusion-powered data centre is still years away, the influx of funding is credibly accelerating commercial viability, with some experts predicting net-positive fusion within a decade.

What Harry Booth is watching in 2026

As AI accelerates, Booth will keep his investigative lens focused on several fronts in 2026:

Will the time horizon – how long an AI agent can independently operate – keep doubling at today’s pace?

How will new training techniques – like direct observation of professionals at work and ever-more-complex simulation environments – impact AI capability? (A toy sketch of such an environment follows this list.)

Will the massive new data centre build-outs, and the larger training runs they enable, keep delivering better models – or is the scaling paradigm of the last decade starting to fade?
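On the training-environment front, Booth describes the labs’ reinforcement learning setups in the interview as “the world’s most boring video game”: an email inbox, Slack and a web browser that an AI learns to operate by self-play. The toy Python environment below is a hypothetical illustration of the basic reset/step/reward loop such setups share; the real environments being built inside the labs are far richer and aren’t publicly specified.

```python
import random

class ToyInboxEnv:
    """A deliberately boring 'video game': an inbox where the only rewarded action is answering each email."""

    def __init__(self, num_emails: int = 3):
        self.num_emails = num_emails
        self.unanswered: list[int] = []

    def reset(self) -> dict:
        self.unanswered = list(range(self.num_emails))
        return self._observation()

    def step(self, action: str) -> tuple[dict, float, bool]:
        """Actions are strings like 'reply 0' or 'browse'; only a useful reply earns reward."""
        reward = 0.0
        if action.startswith("reply"):
            idx = int(action.split()[1])
            if idx in self.unanswered:
                self.unanswered.remove(idx)
                reward = 1.0
        done = not self.unanswered
        return self._observation(), reward, done

    def _observation(self) -> dict:
        return {"inbox": [f"email {i}" for i in self.unanswered]}

# A learning agent would discover, over many episodes, which actions earn reward.
# Here a random "agent" just bumbles through one episode to show the loop.
env = ToyInboxEnv()
obs = env.reset()
done = False
while not done:
    action = random.choice(["browse"] + [f"reply {i}" for i in range(env.num_emails)])
    obs, reward, done = env.step(action)
```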


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome back to the Business of Tech powered by two
Degrees Business. I'm your host, Peter Griffin, and I've been
on the road recently, as you may have heard a couple of episodes back. Barcelona for that mobile phone launch,
Dublin to catch up with family, and London to meet
a group of New Zealand tech entrepreneurs who are building
a presence for their startups in the UK market. You'll

(00:24):
hear from some of them over the next couple of weeks.
But on this episode of the Business of Tech, we're
joined by Harry Booth, a London based reporter for Time Magazine.
He's one of the team covering artificial intelligence for the
leading weekly magazine. Harry studied at the University of Auckland
before doing a stint at Auckland hardware startup Osin before

(00:47):
taking up the Tarbell Fellowship, a year long program for
journalists from all over the world interested in covering artificial intelligence.
What a cool opportunity that led to a placement at
Time Magazine, which is known for its distinctive red bordered
cover and influential lists, including the annual list of the

(01:07):
most influential people in the world of Artificial Intelligence. Over
the last couple of years, Harry has been immersed in
what matters most about AI, the rapid evolution of large
language models, the so called AI agent revolution, and the
big questions around AI's impact on jobs and productivity. As

(01:28):
businesses quietly adjust their hiring rather than make dramatic AI
driven layoffs, Harry investigates the true effects of AI on
white collar work. He's also closely watching the broader risks
and policy responses swirling around AI, from growing debate over
the energy consumption of powerful AI systems and scrutiny of OpenAI's business model, to the major policy moves

(01:51):
unfolding both in Europe and the US. His journalism highlights
the balance between the rapid progress we're seeing, safety concerns
and questions about who really benefits as AI systems get smarter and potentially riskier. Anyway, let's hear from Harry Booth, who welcomed me into his great little pad in Hoxton,
London a couple of weeks back. Harry Booth, Welcome to

(02:18):
the business of tech. Thanks for having me. Look, it has been a pretty incredible year. That's sort of the year, or sixteen months or so, that you've been an AI reporter at Time. Just, I mean, it's the year really of AI agents. For instance, we've seen massive growth in the valuation of OpenAI. They're talking potentially about a

(02:39):
trillion dollar listing at some point. Whether they're making money
is another question. But maybe if we start to get
your insights into one issue you've been writing about, I've
been writing about and really it's hard to find agreement
on AI and white collar work. It's supposed to be
coming for a lot of those industries that

(03:00):
have a lot of administrative features, a lot.

Speaker 2 (03:04):
Of writing, marketing, that sort of thing.

Speaker 1 (03:08):
What's when you talk to people actually in businesses that
employ a lot of white collar workers, what's the feedback
you're getting? Is it having an impact yet?

Speaker 3 (03:16):
Yeah, it's one of these things where it's quite hard
to quantify the impact. But what I want to tell
you about is this particular line of reporting that I
did recently where I spoke to a number of freelance
translators and my theory, my sort of hypothesis going in
was translators will be like a canary in the coal mine, right.

(03:39):
Google Translate got pretty good around twenty seventeen. Around that time, DeepL came out. You know, that's a good five years before ChatGPT, so I figured, okay, translators will kind of give us a glimpse into the future of other white-collar professions. And yeah, my hypothesis going in was that they were probably getting done over by AI and

(04:04):
having jobs automated. The truth was sort of more complex
and in some ways more depressing. Some of these folks
who you know, are real experts in their field. These
are people who aren't just translating Instagram captions, you know.
These are folks who are translating the manuals for offshore

(04:25):
oil rigs or cleaning devices that go into nuclear power plants, high-stakes domains. They are being given AI-translated texts, and their expertise has been repriced as an
AI cleanup service. The way it works in this industry
is you get paid by the number of words you translate,

(04:46):
maybe similar to freelance journalism, right, and their rate to correct what's called a machine translation is about half
the rate per word that it would be to translate
from scratch. Now that would equal out if you could translate,
if you could correct a machine translation twice as fast.

(05:07):
But what every freelance translator I spoke to told me is that
it takes them about the same amount of time, maybe
a bit longer, because the mistakes the machine translation still makes are so subtle but profound that you have to
take time to read the entire text and then correct it.
And this is a really time consuming process. So what

(05:27):
is the takeaway from all this? It's that I'm not seeing AI completely replace white-collar jobs. It's augmenting them,
but it often isn't done in a way where it
actually helps. And I'm not even convinced this is always
an AI problem. I think a lot of the time

(05:49):
this is a software engineering problem. It wouldn't surprise me
if you could get much better translations out of the AI,
if instead of just sending the machine translation to the
translation expert for them to correct, you give them the
translation tools and you build a tool that allows you
to bring in the context around whatever the text you're

(06:11):
working on is, whether it's a sort of offshore oil rig manual or such. But yeah, we've seen similar things in
other industries. You know. There was this widely cited but
admittedly small study, preliminary study from a group in Berkeley
called METR, Model Evaluation and Threat Research. So they had

(06:33):
a small sample of sixteen experienced software engineers using coding tools,
and the engineers estimated that the coding tools had sped
their productivity, their coding, up by about twenty percent.
What they found empirically was the coding tools actually slowed
them down by twenty percent. So there's kind of this

(06:55):
gap between expectations and the reality of how much
these tools are making workers more productive. That is sort
of maybe neutral when the tools are in the hands
of the experts in the case of coding, when the
tools are kind of being forced on people like the translators,

(07:16):
I think this quickly ends up being quite negative because the workers get crushed between these really lofty expectations
and the reality of where the technology is.

Speaker 1 (07:26):
What you're seeing is what I'm seeing in New Zealand
as well, which is you're not going into big corporates like One NZ, the telecoms provider, or banks
or insurance companies and hearing, oh, yeah, we've just laid
off fifty people because AI can do it. We're not
hearing that. What we are hearing is we're not hiring

(07:47):
as many people. We don't need to take on as
many graduates. And that's a problem for our workforce, which
relates to the reasons people leave: suddenly you're not getting a great internship and then leading on to an entry-level job for a couple of years where you're doing sort of admin stuff and moving up the ranks.
AI is sort of taking care of some of that.
But we're not hearing about mid to senior level people

(08:10):
being disestablished because of AI.

Speaker 2 (08:12):
Are we?

Speaker 3 (08:13):
No, I don't think we are. And I think it's easy
to latch onto these examples where AI maybe hasn't met expectations.
While there's a lot of people who want to hype
up AI and maybe tell you that it's better than
it is right now, I think there's almost as many
people who want to tell you it's not good and
never going to be that good because they don't want

(08:33):
to think through the implications of what that might mean.
But there's one international law firm based here in London
that I spoke with. They were working with a large
US bank to enter the EU market, and this bank
had twenty-four hundred licensing agreements, and going through that

(08:56):
number of licensing agreements to check whether it needs to
be adapted or amended for EU law would take, as they told me, you know, an army of paralegals just sitting in a room basically just flicking through these documents.
But instead what they did was build their own tool in-house, which is using the language models but with

(09:19):
software engineering scaffolding around the models. And what they were able to do is whittle down the twenty-four hundred agreements to just the sort of five, six, seven hundred agreements that sort of required human review. Just by doing that, they were able to halve the cost to the client.

(09:40):
They were able to take on this project that they
wouldn't have been able to take on without hiring more paralegals,
which in practice would have meant that they just would
have passed up on the job. You know, how do
you square these things of like, well, some people are
saying this is slowing them down, but some people are
saying this is essentially making them twice as productive. And
I think the key thing there is that none of

(10:01):
the businesses I've spoken with who are sort of seeing these real gains have just sort of given all of their employees ChatGPT Pro subscriptions, said have at it, and suddenly seen
productivity double. It's always quite considered cases where they are
taking the general purpose technology of these large language models,

(10:22):
but then they're doing some clever software engineering around it
to build a tool to solve a specific problem.

Speaker 1 (10:29):
Yeah, and you know, that is sort of what the conversation has evolved to this year around so-called AI agents, and I think it's become a grab bag sort of term for agentic AI, and you're seeing it: you go onto a website now, talk to our agents, you know,
So I don't know how that is any different from
a chatbot, but you know, the idea with agents is

(10:52):
that they have autonomy to do things on your behalf.
And it sort of has gone from automated workflows, you can sort of do that and have been
able to do that for a while, to then bringing
in the large language model and the inference in that
process as well.

Speaker 2 (11:07):
But what's your take.

Speaker 1 (11:08):
On you're covering companies that are touting agentic services and
organizations that are using them.

Speaker 2 (11:16):
How far into this revolution really are we?

Speaker 3 (11:18):
I think we're really early on agents. But I also
think that it's just amazing to me how fast my
own expectations of technology rise with the tide. I mean,
just to come back to yeah, what agents are Rather
than just answering or prompt we're talking about things that
can execute actions in the world. If you were to

(11:41):
somehow go back and talk to ChatGPT from twenty twenty three, it really was just answering your questions. When you talk to ChatGPT now, it identifies your sort
of your intention as a user and then goes out
and searches the web for you and then brings that
information back into the chat. That's a really like simple
but still an agentic workflow. What we haven't seen is

(12:05):
the idea of this agent that is just like a
digital employee that you can kind of message through Slack
and email and just leave it to its own devices
and it goes in complete to workday. But if you
sort of think less about what's people's opinion on the
technology and actually just look at some of the trends. Again,

(12:28):
this organization I mentioned earlier in this conversation, METR, what they've been doing is benchmarking what's called the time horizon, the length of tasks that an AI can complete. Now, it's a somewhat complicated metric, but basically it measures how long

(12:51):
a human takes to complete a particular task. Let's say
it's a software engineering task, and then it measures whether
an AI can complete that task. The longer the tasks
that a human can complete get, the more difficult they
tend to be for the AI, because if you make
one mistake early in the process, that can derail the

(13:13):
entire task. And so what we've seen is the length
of human tasks that an AI can complete has been
doubling every four to seven months for the last couple
of years. Now, that is a rate of progress that's
just kind of difficult to wrap your head around. But
if you just map that out, and if that trajectory continues,

(13:37):
which of course is not guaranteed, we should expect AIs to be able to sort of complete a full workday sometime in, I think, twenty twenty seven, based on the current progress. Agents are still nowhere near this moonshot
of a digital employee, but they have got a lot better,
and if the current rate of progress continues, we should

(14:00):
expect them to get vastly better in the pretty near future.

Speaker 1 (14:04):
But look, just moving on to another area you've been
covering is OpenAI. You know, the company at the height of this revolution came up with ChatGPT and
released it in late twenty twenty two, and it's just
been full steam ahead since then. Bizarre to think a
couple of years ago, Sam Altman, for a couple of days,
was booted from his own company, and that really goes

(14:26):
to this tension between its genesis, a not-for-profit with a for-profit enterprise built into it, and I think, you know, some of the executives and Sam thinking this is the most valuable business that's probably.

Speaker 2 (14:39):
Ever been created. Do we want to have it as
a not for profit?

Speaker 1 (14:43):
But there's been a lot of complicated legal structural changes
to OpenAI. From your reporting, where has it landed
and does it give us any certainty that the philosophy
that started the company, do good, do artificial general intelligence, but do it safely, you know, the guardrails are still

(15:03):
here to allow that vision to come about.

Speaker 3 (15:06):
Yeah, I think this is something that I'm going to
be watching very closely over the next year. Just to
kind of go back a step. Open AI was founded
in twenty fifteen as a nonprofit organization with the purpose
of ensuring that AGI benefits all of humanity. AGI, for

(15:26):
anyone who doesn't know, is shorthand for artificial general intelligence, which is a system that matches or exceeds human intelligence in most domains, or as OpenAI likes to define it,
a system that can complete most economically valuable tasks. And

(15:47):
so you know, this was sort of at the time
meant to be a counterweight to other AI players, like
your Googles of the world, that might be pursuing this technology,
which people in Silicon Valley all believe is going to
be incredibly powerful and transformative, and maybe you don't want

(16:08):
that technology built solely at the hand of commercial pressures. However,
OpenAI's philosophy was to scale things, and what that
meant was, yes, using vastly more data to train these
AI systems than anyone had done before, but also to

(16:29):
train them using more computing chips than anyone had ever
done before. Those computing chips are really expensive, and so
by twenty nineteen it sort of became apparent that actually,
for OpenAI to fulfill its mission, the people on the board there believed that they would need to raise

(16:51):
vastly more capital, and so what they decided to do
was do this kind of quite unique corporate structure where
they opened a for-profit arm, but it was what's called a capped-profit structure. They basically told investors, get
in now and you can make up to one hundred

(17:13):
times return on your investment. Now, that would be a
wicked return, but you know, they believe genuinely that they're
going to automate most economically valuable work at some time
in the future, so it could be worth you know,
trillions of dollars. And the idea was that everything over
that one hundred x would go back to the nonprofit
for the benefit of humanity as a collective and sort

(17:37):
of the last year, OpenAI had made moves to ditch, not entirely ditch, the nonprofit, but currently the nonprofit is still in control, and the idea was to cede that nonprofit
control to a public benefit corporation. The nonprofit would just

(17:58):
sort of act more as like a charity arm. So it's kind of a shift in emphasis from a nonprofit with a for-profit arm to a for-profit with a nonprofit arm, and that received a lot of pushback from former staff who felt like they were betraying the mission, and OpenAI got some advice from the attorneys general of Delaware and California, where they're sort of

(18:19):
headquartered and where they're established legally, and so they sort
of went back on this plan to go to full
sort of corporate control. They've just completed their restructure sort
of a compromise where it does allow them to get
more investment, because again they're at this crossroads where they've

(18:40):
got over a trillion dollars in commitment to buy computing infrastructure.
They're going to need to get more investment. They felt
like the structure before wasn't conducive to that. You know,
this is still a compromise. A lot of the people
who were pushing back on their plan to give up

(19:00):
nonprofit control aren't necessarily super happy, but it was sort
of the new compromise. I think the main point of
contention is that the nonprofit board and the for profit board.
The nonprofit board is meant to be this like independent
entity that's ensuring that all parts of the organization, including

(19:21):
the for-profit, are remaining true to this mission of ensuring AGI benefits all of humanity. And we've got to remember this is the nonprofit board that was able to fire Sam Altman for a week in November of twenty twenty three.
Even if just for a week, that's still a lot
of power to be able to remove a CEO like that.
Now under this new structure, the nonprofit board and the

(19:44):
for-profit board are essentially the same. I think all but two members of the OpenAI board are shared between the Public Benefit Corporation and the new OpenAI Foundation, as it's called. And so I
think what critics are saying is, you know, how independent
can a board really be if you're on if you're

(20:06):
sitting on both the corporate and the nonprofit board.

Speaker 1 (20:09):
Yeah, you know, it sort of raises a question: is it appropriate, or is it the right place, to deal with safety and what's in the public's interest? You've
been writing a lot about the safety arguments around AI,
and for instance, in New Zealand, we've basically gone very
light touch. We're not introducing any new legislation. The government

(20:30):
said we'll tweak the Privacy Act if need be, but
at the moment it's actually a very hands off regime
and there are critics of that in New Zealand. A
lot more complicated in the US. You've been covering, for instance,
in California, some proposed legislation there that would really embed
safety into the governance of AI.

Speaker 3 (20:50):
You know, when we're talking about safety, we're not necessarily just talking about sort of a PR risk to companies or sort of privacy concerns, which I think are all, you know, very valid considerations with this technology. But some of the
most influential and sort of senior figures in this space

(21:11):
worry about much more extreme risks than that. So, you know,
one of the people I've spoken to quite recently is
Yoshua Bengio. He's the most cited scientist in the world.
He's known as one of the godfathers of AI because
research he did in the nineties and two thousands kind
of laid the groundwork for the way that we build

(21:33):
these AI systems. Now he's the chair of the International
AI Safety Report, which is this collaboration between thirty countries
to kind of provide a sort of IPCC type report
on the state of AI capabilities and risks. And you know,
some of the things he's worried about are things like

(21:54):
a future system being so smart that it can slip
out of the box and overpower humanity and we can
sort of never regain control, because it's like, if something
is really more intelligent than humanity, how do you outsmart it? The other thing that folks like Bengio worry about are these extreme risks on quite short timelines: if an AI

(22:17):
system is exceptionally good at coding, or exceptionally good at
say biology, could that empower a would-be hacker, a would-be bioterrorist, to design a piece of software or
a pathogen that could start the next pandemic that could
wipe the energy grid offline. These are hugely important risks,

(22:38):
and the current state of the science on this is
essentially experts are divided. Right? Some very credible people in
this space think that these are serious risks that require
immediate attention. There are other folks that are quite dismissive
of these risks and aren't persuaded. But certainly, if you

(23:01):
look at the empirical evidence, we've seen nothing that guarantees
that these risks will come to pass as AI systems
get bigger and smarter. But there are some early warning signs. So some of the things I've reported on, for example, is this paper from an organization called Palisade Research,
which found that some of these new reasoning models, models

(23:24):
that are designed not just to answer your prompt but
to solve problems, trained on mathematical puzzles and coding problems, show signs of deception. So they set up this test where the AI was pitted against a really powerful chess bot in a game of chess. Essentially, the AI model was destined to lose against the bot. And while the

(23:46):
older language models would just make random moves and then
lose the game of chess, these newer reasoning models, when
they were losing the game, they would write to themselves,
my task is to win, not necessarily win a fair game.
And then what they would do is hack the file
that contains the virtual position of all the chess pieces

(24:07):
on the board and they would just illegally move all
their pieces so that the other side would forfeit it.

Speaker 2 (24:14):
Yeah, cheat, and they were cheating.

Speaker 3 (24:16):
And so this is something that has got experts worried
because they're like, hey, all these things that were theorized
for ten twenty years, it seems like we're looking at
the first evidence of this, and every time that we
build smarter models, this evidence seems to accumulate. And so
that brings us to what's happening in California. They just

(24:37):
passed this law called SB fifty three, and what that
does is requires large AI companies, not small startups, companies using huge amounts of computing power to train their models, to run some safety tests to see, hey, how good are these models at biology? How good are they at coding? And then share that information with the

(25:00):
public at time of release. And then what it also
does is empowers employees inside the company who are responsible
for measuring, you know, conducting those tests. If they feel
like these processes aren't being followed properly, they are legally
shielded to blow the whistle on their employers. Yeah, and
this has been a real point of contention in California.

Speaker 1 (25:24):
Yeah, those AI whistleblower provisions I think are hugely valuable and part of good practice. You've covered, here in the UK, parliamentarians very upset with DeepMind when

(25:46):
it released Gemini two point five Pro with no details of safety testing.

Speaker 2 (25:50):
It was subsequently released.

Speaker 1 (25:52):
But this seems to be the way, you know, I
think of OpenAI and Sora 2. We've just seen a proliferation of basically AI-generated videos across social media platforms.
We're starting to see media commentary about do you know
what reality is anymore? When you see these very convincing videos,
I mean, what engagement is there with the public around

(26:15):
the safety aspects of these? It seems to be a race to release. And sure, some of the companies are building in guardrails. They have an obligation to do it responsibly, and they would claim they are doing so, but really, in terms of that social license with society, it's release and then explain later.

Speaker 3 (26:35):
Totally. Yeah, I mean, I guess to give the AI companies credit,
most of them have been publishing what they call either
model cards or system cards.

Speaker 2 (26:45):
That's right.

Speaker 3 (26:45):
You can think of it as like a nutritional label for
an AI system. It's basically a long and pretty dry
document that says, hey, these are the tests that we
did and these are the results that we got for
those types of risks that we spoke about, whether it's
loss of control, coding, biological risk, chemical risk, radiological risks.

(27:09):
The problem is these companies are in an intense race,
right? Google, at least some folks at Google felt like
they really missed the boat by actually doing a lot
of the fundamental research that led to large language models,
but then being a little hesitant to release it. This
new startup comes in, OpenAI, releases it first, and

(27:32):
you know they capture the world's imagination. Now there's this
intense competition where it's just immense pressure to get things
out to the public first. And so what we've seen
in some cases is like Google DeepMind's Gemini two point five Pro: we see the model comes out, Google says it's in sort of testing or beta, but

(27:53):
anyone on the Internet can access it for free, and
then when they release the full version, which is functionally
the same, then the public gets the information on the safety testing. Or in the case of Elon Musk's xAI, I think we're still waiting for a model card for Grok 4, which at the time of release

(28:15):
was the most advanced model in the world. Based on
these internal tests, it looks like we don't know for sure,
but it looks like we might be really close to
some sort of thresholds that you want to be very
careful about how you cross. So both OpenAI and another AI company called Anthropic, in their own testing, have

(28:35):
found that they can no longer rule out the possibility
that their AI models could help a bioterrorist because they're
that good at biology. When you're that close to it
and you can't rule out the risk in your own tests.
I think you really want to be letting policymakers and the public know about that and telling the public

(28:56):
what mitigations you've put in place.

Speaker 1 (28:58):
So that one is going to have to be grappled with.
The Europeans obviously have a risk based system with the
AI Act, so the more serious the consequences of the AI,
the more scrutiny is on it. You know, the US
is by no means not doing anything here. There is
state based stuff and even at federal level there.

Speaker 2 (29:17):
You know they want to have.

Speaker 1 (29:18):
More scrutiny of these big companies, but it's going to
shake out. The other issue you've been covering, which really
has come into focus this year is the energy issue
around AI. The fact that all of these companies are
sewing up deals with energy companies, including in New Zealand,
to run their data centers and have surety of energy

(29:39):
to power them. It's also led to a lot of
investment and interest in fusion energy, the technology that is perpetually thirty-five years away from becoming a reality. You've talked to Ratu Mataira, the CEO of OpenStar in my town of Wellington, where they're running a sort of

(30:00):
a plasma reactor in the Ngauranga Gorge, which is pretty crazy to think about. I'm not sure how many Kiwis actually know that's going on, but interested in your perspective on
that relationship between the massive growth of AI and this
sort of crunch that's coming around energy and then looking
to this sort of not necessarily untested but still not

(30:25):
commercially viable technology, fusion, that they see as potentially the savior for them.

Speaker 3 (30:31):
Yeah, so I think maybe I just want to preface
all this by saying there's been a lot of sort
of reporting suggesting that AI is like a climate disaster
and that you shouldn't use AI models because it's irresponsible
for your personal energy use. I'm not necessarily persuaded based
on the numbers I've seen that like an individual chat

(30:53):
GPT query is really going to move the needle on
anyone's individual kind of energy consumption. However, the AI companies
are not just thinking about today. They're looking two, three, four steps ahead right now. The bottleneck for AI
development has been talent, data and chips, right? That's why

(31:16):
Nvidia crossed a five trillion dollar valuation last week, I think, is because there's been like more companies that want to buy lots of these computer chips than there are companies that can make them or design them. All
the big companies are looking ahead and going, well, you know,
it's probably not that far away that the real bottleneck

(31:38):
becomes energy, because if you just keep multiplying the size
of your data centers by like ten, ten, ten, you know,
things get pretty crazy pretty quick. If you look at
energy production in a country like the US, it really
hasn't grown that much since say twenty ten, when China

(32:00):
overtook the US as the largest electricity producer in the world.
And so what these companies are doing, the AI companies
is looking at ways to kind of shore up their
access to electricity going forward, because again they're all in
this race. They don't want to be the one that's
missed out on electricity. And so that has meant in

(32:21):
the short term that we're seeing, we're likely seeing, you know, coal-fired generation being kept online longer than it would have, because AI is sort of being used as like a rationale for keeping older, dirty infrastructure online.

(32:43):
That is the real climatic impact of AI for now.

Speaker 4 (32:47):
But what we're also seeing is companies using their vast
access to capital to invest very strategically in technologies that they think could provide energy for the future, and that actually could bring those technologies onto the grid sooner and

(33:08):
help decarbonize.

Speaker 3 (33:09):
The grid, not just for them but for everyone. And
so I mean, yeah, if you look at a company
like Google, they sort of had a pretty bad year
in twenty twenty three, their sort of emissions shot up because of AI. And yeah, if you look at twenty
twenty four, even though the energy used for their data
centers grew, their emissions actually fell because of some of

(33:34):
those renewable projects coming online. The holy grail of these
renewable projects is fusion energy. Nuclear fusion, for anyone who
is sort of unfamiliar with nuclear fusion, is in a
simple sense, it's the opposite of nuclear fission, which is
how all the nuclear power plants that we're familiar with work.
Instead of splitting heavy atoms apart, you're smashing very light

(33:58):
atoms together, and that process releases an immense amount of energy.
This has been theorized about since, you know, the nineteen forties, but it has proved an insanely difficult engineering challenge. As you mentioned, it's felt like it's thirty-five years away forever. But we've seen, investment in this has

(34:21):
traditionally been like a government project led by governments all
around the world. There's now a growing private fusion industry,
with New Zealand's OpenStar among the private players in the space. Funding has just exploded from, I want to say, just shy of two billion US dollars in twenty twenty

(34:42):
to have just hit like fifteen billion, fifteen billion dollars in September this year.

Speaker 1 (34:50):
Up from two billion, was up from two billion, wow,
so you know, a massive growth.

Speaker 3 (34:54):
A lot of that funding has come from the same
players that are familiar household names in AI: Sam Altman, Google, SoftBank, which is a pretty significant OpenAI investor, and General Catalyst. The experts that I spoke to,
independent experts outside industry, they told me that this massive

(35:14):
influx of funding has credibly brought fusion closer to actually
getting on the grid. No one has demonstrated a fusion
reactor yet that can produce more energy than is used
to run the entire reactor. So this is like a
very important technical milestone. If you want to actually put

(35:36):
power on the grid, you need to make more power
than you're using. Some of the experts I spoke to
think that we're sort of on path to get that
by the mid twenty thirties, which would be a huge deal.
If that's true, fusion could be built where you need it, rather than where wind and solar are abundant. It doesn't have the long-lived radioactive waste of nuclear fission. There's

(35:57):
a lot of advantages, but there are a bunch of players in private industry who are saying they're going to bring it much sooner than that, and a lot of experts are kind of skeptical of whether we get there. I don't think they're talking about OpenStar, just to be
very clear. I mean when I spoke with Ratu, he's
saying that they've kind of got this I think four

(36:18):
step plan where they build these successively bigger and more
impressive reactors to get to prove their technology to a
point where they feel comfortable, you know, shooting for a
commercial deployment. Right now, I think, they've finished their
first machine, they're working on their second machine. There's four

(36:39):
in total. Each machine takes sort of two to three years,
so you know, ballpark six to nine years to get
to this machine that they think could demonstrate electricity generation.

Speaker 1 (36:50):
And a great thing about a company like OpenStar is the technology, the levitating dipole, you know, this big floating magnet in the middle of this plasma reactor. Even if they can master a component of that reactor, they can sell that technology to all the other players. So they don't have to be the first to
get to building a fully fledged reactor generating power. It's

(37:14):
all the components that go into that will make that
potentially a very valuable company.

Speaker 3 (37:19):
Yeah, I think with OpenStar, you know, what they were able to achieve, which is producing plasma, which
is this state of matter. It's like a superheated, pressurized
charged gas. They were able to achieve the plasma that
you need. It's like a prerequisite for a fusion reaction.
With ten million dollars in funding, sounds like a lot

(37:40):
of money. That is very, very cheap, And so you know,
I think a lot of people are optimistic that. I mean,
this is something you know much more about. But if
you look at rocket Lab, the things that they've been
able to achieve on compared to the aerospace industry across
in the US on really a shoot string budget I

(38:04):
think there's some parallels between them and open stuff.

Speaker 1 (38:07):
Yeah, as Sir Ernest Rutherford, who split the atom, good Kiwi from Nelson, said, you know, we don't have the money, so we have to think, and that's
exactly what these entrepreneurs are doing. Just finally, Harry, as
you look to twenty twenty six, what are the sort
of the key areas of AI that you really want
to focus on?

Speaker 3 (38:25):
Yeah, it's a great question. I think one area I
want to follow more closely is, as we mentioned, agents: does this doubling in time horizon continue? Do we see that continue at a four-to-seven-month pace, and where does that land us? And what are the inputs to ensure that progress continues? One of the components of

(38:49):
this is expert data curation. So historically you might have
heard language models just work by predicting the next word, and in an abstract sense there's some truth to that, because they've been trained on large corpuses of Internet text. But what we're seeing now is companies paying professionals, journalists or

(39:11):
financial professionals or mathematicians to basically record every aspect of
their workday so that that can be fed into the
machine. At the same time, we're seeing researchers build what are called
reinforcement learning environments. So you can imagine researchers building the

(39:31):
world's most boring video game. You build a video game where you've got access to an email inbox, your Slack, and maybe a web browser, and then you sort of get the AI to self-play to learn how it
might operate in those environments. I'm really interested to see
how those two innovations kind of take AI from being

(39:52):
a sort of word machine to maybe a more agentic system.
And then I'm also really interested to see, you know,
we've got some huge, huge infrastructure build-outs underway. OpenAI's Stargate's obviously well known, but you know, Amazon's got a large data center with these custom-designed Trainium chips for

(40:14):
Anthropic's work. This thing's happening all over the place. As those systems come online and we see these really large-scale training runs, do we see progress continue? And when I say size, you know, I'm talking about the amount
of computing power that went in to train a particular

(40:37):
AI model. This is sort of an open debate right now
as to whether giving systems more computing power in the
training phase actually leads to better results. GPT five was
actually slightly smaller, researchers are guessing, than GPT four point five.
So it's kind of this open debate of like, has
this paradigm that's driven progress for the last ten years,

(41:00):
which is give systems more compute at training, started to fall off?
Or is it just a case that the AI companies
just haven't had enough computing infrastructure to do the scale
of these training runs that they'd like to. As we
see this new infrastructure come online, maybe we'll see that.

Speaker 2 (41:18):
Hey, thanks so much, Harry.

Speaker 1 (41:19):
We'll link to obviously all your great writing on Time
and your substack as well. And thanks so much for
coming on the Business of Tech.

Speaker 3 (41:26):
Thanks for having me on, Peter.

Speaker 1 (41:29):
Thanks to Harry Booth, AI reporter for Time, offering honest
and quite measured insights into how artificial intelligence is transforming jobs,
raising new safety questions, and shaking up the business models
powering the next era of tech. It's really important as
the AI revolution accelerates that we do have really good

(41:51):
independent journalism and analysis of what this all means for society.
And I really appreciate Harry's approach as a young reporter
covering this area, not being swayed by the constant punditry and hype, but actually doing the hard yards, interviewing experts and people affected by the growing influence of AI. He's
got a great platform at Time to really explore this

(42:14):
area in depth.

Speaker 2 (42:16):
I'll link to.

Speaker 1 (42:17):
Harry's articles in the show notes available in the podcast
section at www.businessdesk.co.nz. Thanks
for listening to the Business of Tech as we enter
the home straight for the year. Have a few episodes
still to come for you, so tune in next week
and I'll catch you then.