
June 11, 2025 • 26 mins

This week on The Fin, technology editor Paul Smith discusses the AI future and whether it is more likely to be utopia or dystopia.

This podcast is sponsored by Aussie Broadband.

Further reading: 
Waymo’s robot driver was too scared to take us where we wanted to go
The self-driving taxis are an experience to remember but their post LA-riot nerves rendered them unable to use human logic, and left us stranded.
Apocalypse or a four-day week? What AI might mean for you
If you’re “AGI-pilled” and you believe artificial intelligence will soon surpass humans, you’re probably worried about your job. But insiders reckon that might be the least of our problems.
From ‘lucky country’ to ‘left-behind country’: Matt Comyn’s AI warning
Executives say Australia is in danger of falling behind as the rise of artificial intelligence creates a profound change in the way people and businesses work.

Save 50% or more on unlimited access to the Australian Financial Review in our EOFY sale, ending June 30.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
The Australian Financial Review.

Speaker 2 (00:07):
I use AI in my assignment information sheets. I can
paste directly into this thing called turboland dot A, and
it uses AI to generate me study notes, flash cards,
quizzes and podcasts.

Speaker 3 (00:21):
The other day I asked it to create me a running
program to get fitter for my sport, and it
created a five-week running plan with long runs and
interval training.

Speaker 1 (00:32):
Teenagers are already using AI as part of their everyday lives.

Speaker 2 (00:39):
I make it summarize my notes and turn them
into a podcast, and I listen to it, like, on
the day of exams. I'll put the whole grading rubric
into ChatGPT and then put in my assessment and ask
for harsh feedback.

Speaker 1 (00:53):
This was a big focus at the Financial Review's AI
Summit last week. Tech leaders, chief executives and scientists were
all asked what they were doing to prepare their kids
for the future. The answers varied from encouraging them to
become AI ninjas to doubling down on the human experience
by studying landscaping or philosophy. But everyone agrees that we

(01:17):
are hurtling toward an AI-disrupted future and our kids
will be the ones dealing with the fallout. And while
there's optimism about scientific and medical breakthroughs and a much-needed
boost in productivity, there's also concern about job losses and
power concentrated among a handful of mercurial tech billionaires.

Speaker 4 (01:39):
The thing that I find a bit frightening at the
moment is it's so unclear how the next generation going
through high school are going to plot this path
over the next few years, because they're finishing school and
going into a workforce when nobody in the workforce even knows
what they're going to need in the next few years.

Speaker 1 (01:55):
Welcome to The Fin. I'm Lisa Murray. This week, technology
editor Paul Smith on the AI future and whether it's more
likely to be utopia or dystopia. It's Thursday, June twelfth. Hi, Paul,

(02:24):
thanks for coming on the podcast.

Speaker 4 (02:25):
Thanks for having me.

Speaker 5 (02:26):
Lisa.

Speaker 1 (02:27):
You hosted a few panels at the AI Summit last
week and spent the day working the room. There was
a general feeling that it's all happening much faster than
we expected, and that companies, investors, economies and governments aren't ready
for it. What were your main takeaways?

Speaker 4 (02:45):
Well, yeah, you're right, there was definitely a sense of
foreboding that Australia is in danger of falling behind. But overall
I came away with a sense that there was a
really clear need to have some frank conversations about both
the benefits and dangers of a shift that's coming, one that's going
to redefine national productivity, employment and society itself for

(03:07):
decades to come. We had a couple of Australia's biggest
chief executives, Commonwealth Bank's Matt Comyn and Telstra's Vicki Brady,
appearing at the summit as well, and they've both spent
a lot of time over in Silicon Valley recently and
really are positioning themselves as forward-thinking in the AI
realm and keen to make sure that their organizations aren't

(03:27):
left behind. And they both admitted that things are moving
at a faster rate than they previously expected, and had
some open questions about how it's going to affect their
companies. Now, one of the most interesting speakers there was
Liesl Yearsley, the founder of Australian artificial intelligence company Akin.
She's a real veteran of the AI scene in Australia and

(03:48):
has built successful chatbots way before they were a thing
that everyone was chasing. She said Australia is grossly underestimating the
scale of change AI is going to bring, and she put
it in a historical context.

Speaker 6 (04:03):
I think the magnitude of what's coming at us is
nothing less than the transition we saw between the eighteen
hundreds and the nineteen hundreds.

Speaker 4 (04:11):
She pointed out that in the eighteen hundreds we were
all involved in manual labor.

Speaker 6 (04:15):
In the eighteen hundreds, ninety-five percent of us were pushing
a plow through the dirt, and that's what we did
with our time and our effort. We were dominated by

Speaker 4 (04:24):
muscle. And then we moved on from that, from muscle
to machines, in the Industrial Revolution, and

Speaker 6 (04:30):
What we did with the Industrial Revolution is machines replaced muscle, and it completely
transformed our entire planet.

Speaker 4 (04:36):
And then with machines doing most of the manual work,
we now work mostly with our brains.

Speaker 6 (04:42):
You know, ninety-five percent of labor is cognitive, and
AI is coming after absolutely everything.

Speaker 4 (04:52):
So she paints a worrying picture for a lot of us, really,
because if ninety to ninety-five percent of the work
is now able to be done by bots, that doesn't
leave a lot for the rest of us to think about.

Speaker 1 (05:02):
Does it? The pace of change was repeatedly emphasized at
the summit. But would you say the mood overall was
generally optimistic, that most people were signed up to a
more utopian view of the AI future?

Speaker 4 (05:15):
Well, that's right. I mean, you're not going to have
too many people turn up to an AFR AI Summit
and say that it's going to end the world. So
you kind of have to leave that floating in the
background a little bit and hear the case for the
positive impacts. And I might sound like I'm on the
side of the doomsayers, but there really are some very
positive examples of how AI has already made a major

(05:38):
difference and some positive changes that were highlighted. We had
James Manyika, who's Google's head of Technology and Society, presenting
early on in the summit, and he was talking about
the scientific breakthroughs that AI has already helped them usher
in over at Google. Google's team won a Nobel Prize
for using AI to predict the shape of proteins, and

(05:58):
he talked about the potential for AI to improve diagnostics
and detect natural disasters, how autonomous driving is taking off,
how people can be kept away from dangerous positions on mine sites,
and how people can hold online meetings in different languages in real
time without a human translator. I think that was a
really tangible example, and they've demonstrated that technology recently. It's

(06:20):
in development. That really is one of the first examples
that you see of real sci-fi stuff that you'd really want.

Speaker 1 (06:27):
You can see the efficiency gains of having that conversation.

Speaker 4 (06:31):
Being on holiday and going into a shop and being
able to speak to someone without cracking open a phrasebook
and pointing and shouting. We also had Craig Blair on
one of the panels. He's the founder of AirTree Ventures,
one of the biggest venture capital companies in Australia, and
he was talking about how AI is already helping startups
in his portfolio go from the idea stage to making

(06:51):
profits in just months, with a much smaller team than
they would have had in the past, so really fast-forwarding
the creation of companies. And one point that people made
was that Australia really hasn't scratched the surface yet of
making a fortune from our natural resources and our natural
strengths in terms of being a great location to host

(07:11):
data centers and other infrastructure that's going to power the
AI boom. We've got abundant renewable energy options
here, a lot of space that hasn't been used yet,
and a very stable political and business scene. So
the opportunity is there for Australia to make a motza
from the AI revolution, but it really hasn't been fully

(07:34):
tapped yet.

Speaker 1 (07:35):
That was definitely said repeatedly, wasn't it? We have space,
we have renewable energy, and we have a general enthusiasm
for new technology.

Speaker 4 (07:43):
That's right. Yeah.

Speaker 1 (07:44):
Paul, the focus at the summit was not so much
on generative AI. Last time we had you on the podcast,
we were probably talking about that, the ChatGPTs of the world.
But now everyone's talking more about agentic AI,
that is, AI agents that can go further than ChatGPT
and carry out tasks. Explain for us the difference and

(08:08):
give some real-life examples of how these AI agents
are already working.

Speaker 4 (08:13):
You did a pretty good job of explaining it yourself there.
I mean, it basically is that. I remember
the first time I came on the podcast with you,
we were all so excited by ChatGPT as a fun,
novelty thing. We had it write a rap for
me to do, and I don't think I nailed it,
but you know: just ask it a question and watch
it unfold, that's Australian Financial Review, sharp and bold,

(08:35):
and even Bill Gates knows the deal. ChatGPT's
revolution's going to change the way we feel.

Speaker 5 (08:43):
You got there.

Speaker 4 (08:43):
Yeah, it was fun. So that's generative AI: it
creates new content from the prompts you've given it, whereas
agentic AI is much more obviously valuable to businesses, and
it's much clearer how it's going to improve productivity and
maybe change the way people work. It's
like having an assistant, or an agent, who can do

(09:06):
things for you. They can autonomously make some decisions and
take actions to complete tasks without needing a human constantly
prompting and changing what they want it to do. So
in a consumer setting, it would be like asking an
agent, say, can you please book me a flight to
Brisbane next week and find me the best price, and

(09:27):
it will just go off and do it for you,
navigating through the different websites, making payments if you give
it permission, that kind of thing. Now, from a business perspective,
at our summit we heard from Suncorp about how it's
got AI agents that help it detect problems before they arise.
For example, during Cyclone Alfred, it was able to predict
which houses were going to be hit and more likely

(09:49):
to have problems, so it prepared them to respond to
claims fast, as they said. And we had ANZ's
chief technology officer there as well, talking about how they've
got AI agents reading through hundreds of pages of loan documents,
getting property valuations and things like that. He was
saying that these agents have removed a day's worth of
work for people in assessing complex corporate loans. So yeah.

(10:10):
Agentic AI is also increasingly common in the area of
tech development and coding, where the concept of vibe coding
was spoken about quite a lot. It means anyone
who doesn't know how to code can ask an AI
platform to design something like a website or an app,
and they get code spat back at them and see

(10:31):
the results of it when it's executed, and can refine
it with further prompts, so they don't even really need
to know how to code, but can set about coding.

Speaker 1 (10:39):
With all of those AI agents running around, there's a
real debate now about the impact on jobs. There are
some extreme predictions out there, some of which have come
out in the last few weeks, that it could wipe
out half of all entry level white collar jobs. Do
you think that could happen.

Speaker 4 (10:56):
There's certainly no way of saying that it won't happen.
You're referring to the chief executive of the huge
AI company Anthropic, Dario Amodei, who a few weeks ago
said in an interview that AI could send the unemployment
rate in the US up to between ten and
twenty percent in the next one to five years. So
that's as the technology moves from helping humans do their

(11:16):
jobs to replacing them outright, and it's often happening first
in the tech developer space. It's ironic that the people
who would maybe have been designing these systems, and thought
that they would be all right for years to come, are
the ones that have found themselves being disrupted first. There's
been reporting as well, from the US again, that the
unemployment rate for graduates has picked up as managers have

(11:39):
been encouraged to go AI-first, and a lot of
the jobs that have been harder for graduates to get
are in areas where AI has typically been strong, like
finance and programming and development. And Telstra's CEO, Vicki Brady,
said that it's important to be honest with employees, and
that she thinks Telstra's workforce is going to be

Speaker 5 (12:01):
When you're a leader, I think that transparency and honesty are
so incredibly important. And how do you do that in
a way where you also don't want to panic people?

Speaker 4 (12:12):
She stressed that she wasn't sitting there with a number
for how much smaller the organization is going to be
that she was keeping secret.

Speaker 5 (12:19):
I don't know what our workforce looks like in five years,
but what I do know is I think jobs are
going to look different. I think it's likely our workforce
will be smaller.

Speaker 4 (12:30):
So it's hard to know whether it's going to be
a jobs apocalypse. But the feeling out of the summit
is that the impact of this is going to be uneven:
it's going to create some jobs and replace some jobs.
There's still a lot of talk about having humans in
the loop, as in, we're going to keep you in the loop,
you're in the loop until you realize you're not. And
so this is to have people keeping an eye on

(12:51):
the output and avoid rogue agents running around. And I
think the thing that became obvious to me, though,
and it's been obvious for a little while now, is
that the line that's been regularly used by tech companies
and executives responsible for AI, that AI is going to
augment rather than replace workers, is palpably false. But while
we're all worried about these job losses, they might end

(13:13):
up being the least of our problems, according to Liesl
Yearsley, who we spoke about before. She had a lovely
phrase where she said it's bringing out the worst of capitalism,
and she was veering towards a really dystopian view of
the AI future.

Speaker 6 (13:29):
What we're not really thinking about is that we're actually
creating a thing that has a form of sentience. It's
a coevolution. It's a fundamental shift for our society and
our species. So I think the magnitude of the shift that's
coming is nothing like we've seen. We don't have a

(13:49):
generation to adjust this time. It's happening in a really short
space of time.

Speaker 4 (13:55):
And there are more extreme views out there as well.
In April, former researchers from OpenAI and some other
respected people in the AI industry over in Silicon Valley
released this report called AI twenty twenty seven, and it
has caused a bit of a stir, with
lots of people debating it. It basically describes a fictional

(14:17):
scenario, based on some evidence and some theorizing, about what
could happen when AI systems surpass human-level intelligence, which
they all expect to happen in the next few years,
and what might happen when AI gets away from us.
And to be honest, it's not looking good for humans.

Speaker 1 (15:00):
We're talking about what an AI future looks like, and
there are competing views. The utopian view highlights the potential
for scientific and medical breakthroughs and a huge boost in productivity
that will add billions to the economy, but there are
more dystopian views about the impact of AI on society:
high unemployment, and even apocalypse. What is AI twenty twenty seven?

Speaker 4 (15:26):
Well, first of all, it was a ripping read. It's a
scenario-planning exercise conducted by an expert panel
led by a former OpenAI insider, and it basically takes
the scenario from roughly where we are today and tries
to realistically assess what happens if certain decisions are made.

(15:48):
They are developing these AI systems, and the AI systems
then start helping to develop the next versions of
the AI systems. Somewhere along the way, they lose
the ability to truly see what the AI systems are
trying to do, and the AI begins to be able
to hide its true intentions. Things spiral out of control.

Speaker 1 (16:10):
So it's this narrative style warning about what might happen.
It tells the story of what might happen.

Speaker 4 (16:17):
That's right. And ultimately we get to the brink of a
real cold war between the US and China, and there's
a decision to be made in twenty twenty seven about
whether the US company, which is a few months ahead
of the Chinese one, stops to let humans
regain control of what's happening, or whether they press on.
In the scenario where the company slows down, things become

(16:40):
a little bit more manageable, still not great, but manageable.
But in the scenario where they press on, within five
or six years there are no more humans left on Earth. I mean, the
Earth is covered in data centers and other AI infrastructure,
and we have been taken out of the game.

Speaker 1 (16:55):
And we've been taken out of the game because they
need the space and the power to keep.

Speaker 4 (17:00):
Resourcing themselves. Yes, we cease being useful.

Speaker 1 (17:03):
So as you said, a ripping read it is. Yeah,
are people taking it seriously?

Speaker 3 (17:08):
Now?

Speaker 4 (17:09):
People are taking it seriously insofar as these
aren't idiots putting it together, and they aren't actually coming
out and saying this is what they think will happen.
They're just putting scenarios out there to focus minds on
the discussion, and they point out that there's a genuine
concern about the amount of power being concentrated in the

(17:29):
hands of a really small group of tech billionaires who
may or may not have the best interests of humanity
at heart. So it's really a conversation starter. And I
had the opportunity to ask one of OpenAI's most
senior executives about that, Jason Kwon, who's their chief strategy
officer and who's worked with Sam Altman for years. He was

(17:50):
in Sydney, and we had a chat about it.
He clearly disagrees with this sort of dystopian ending,
and obviously wouldn't be doing what he's doing
if he didn't, but his response to it was pretty
measured. He thought it was a good narrative and
he thought it was worth having these conversations. But he
really thinks the best way to understand the impact of

(18:11):
these products is to start using them and seeing what
sort of problems arise. And he thinks on a much
longer-term time horizon. They talk about a thirty-year time
arc whereby these things will cause major changes, but society
works out a way, because it's in no one's interest.
It's not in the AI companies' own interests, it's not

(18:33):
in government's interests for everything to fall apart. So he's
got a more optimistic view that we will figure things out.

Speaker 1 (18:41):
These are all very big issues. There's a lot at stake,
and yet the Australian government doesn't yet have an AI policy.
Are we being left behind? How do we compare to
other countries?

Speaker 4 (18:54):
Well, the nature of politics means that it feels like
we're back at square one. We did have a policy
plan of sorts, due to be announced at the
end of the year, but it was very much embodied
in Ed Husic, who was the Industry and Science Minister
for the last term of government and spent a lot
of time going around talking to the industry, talking

(19:15):
to stakeholders about what the changes were, whether we need
an AI Act like they've got in the European Union,
or just what the country's position should be. And he's
obviously no longer in the cabinet. He was replaced by
Tim Ayres, who spoke at our AI summit, and it's
perhaps unfair to compare them both, but it was interesting
because both Tim and Ed appeared at the summit. So

(19:37):
Tim Ayres had an interview with me on stage, where
I asked him whether we should have an AI
Act and what he thinks about how to create more
big AI companies from Australia, and he sort
of didn't have any answers yet. It's maybe understandable,
as he's only just got the job, but he was kind
of saying, well, I've got to go back and talk

(19:57):
to my colleagues and the industry about this. Whereas
we had Ed Husic jump up on a panel next,
and, not trying to make him look silly, but he's
already done all of that consultation, and he was saying that in his view
we do need an AI Act, just to give some guidelines.
He described a Swiss cheese approach to regulation at the moment,
which is not going to be helpful to anyone, where
rules get put in place when something goes wrong, and

(20:20):
so there was a sense that regulation is struggling to keep up.
There's not widespread agreement with Ed Husic at all that
we need an AI Act in Australia. There are views,
certainly amongst people in the tech industry, that in the
EU it's become too restrictive and actually stops them being
able to release new products there, because they're always worried
about breaking the rules. But there is a sense

(20:40):
at the moment that we have a bit of a
vacuum in terms of clear direction about how employment policy,
workplace policy and innovation policy need to
interact to get Australia motoring on the global stage.

Speaker 1 (20:57):
All of this depends on where we are on the
path to superhuman intelligence, from using ChatGPT for
this and that to massive change in the way governments,
businesses and people do things. Where are we up to?

Speaker 4 (21:12):
So that's the big question, and the multi-billion-dollar question.
There's a term that's taken off recently in Silicon Valley:
being AGI-pilled, with AGI meaning artificial general intelligence.
That is where artificial intelligence equals the best of humans,
and then the next step beyond that is superintelligence,

(21:34):
which is where it outstrips us in all these areas
as well. And the phrase being AGI-pilled is a
reference to the nineteen ninety-nine movie The Matrix,
where humans could take the red pill to wake from
the dream and see the real world run by AI systems,
or take the blue pill and stay in their dream
and their nice, comfortable existence.

Speaker 5 (21:53):
Yeah.

Speaker 4 (21:54):
So industry luminaries like Demis Hassabis, who's the chief
executive of Google DeepMind and a real pioneer in this:
he's been one of the ones that's been, I guess,
AGI-pilled, and is increasingly convinced that it's going to
arrive imminently. Sam Altman from OpenAI has said it
could be this year. I don't know whether he's walked
that back; that was a little while ago he said

(22:15):
that. And Anthropic's Dario Amodei said in January that he
could see a form of AI that is better than
almost all humans at almost all tasks emerging in the
next two to three years. Then there are people in Australia
like Toby Walsh, who's a very well-respected AI big
thinker at the University of New South Wales. He's written
numerous books on this.

Speaker 7 (22:36):
Yeah, no, I think we are at an interesting.

Speaker 4 (22:38):
point. And he thinks the timeline may be a little
bit more stretched than people think. He says people always
underestimate the last few percent.

Speaker 7 (22:48):
We saw this with self-driving cars. You know, getting
to ninety-five percent was easy. The last bit has
proven to be very difficult, and I think the same
will be true for more general intelligence as well.

Speaker 4 (23:02):
But he does think that the impact on jobs is
starting to happen and will only ramp up.

Speaker 7 (23:07):
People are right to be concerned, because it's starting to happen,
and it's starting to happen in places where I think
many people thought they were going

Speaker 4 (23:13):
to be safe, like the coders that were building the systems.
And you know, for the last decade we've been talking
about getting your kids to learn to code, and now
we're being told that, well, actually your kids need to
watch AI code. But in some good news, Toby was
saying that he thinks AI could bring us to a
four-day week, and one that doesn't actually reduce the
amount of productivity that people put out in the workplace.

Speaker 7 (23:36):
And they always throw up two results. One is that
people are largely as productive in four days of work
as they were in five, so you can pay them
as much; there's no less productivity. You know, people then
don't have as many bullshit meetings and so on. And secondly,
people are happier. Who would have imagined?

Speaker 4 (23:51):
And if they don't work in essential, around-the-clock
roles like healthcare, where you physically need a person,
then they could do their job in four days, and
maybe we'd have a three-day weekend.

Speaker 1 (24:00):
We can all get behind that. Paul, a final question:
utopia or dystopia? What dictates which it will be?

Speaker 4 (24:09):
Well, I think realistically this isn't stopping. There's too much
at stake. The idea is that if the big US companies pause,
then China will race ahead, and that would be a
disaster for them. So I think we have to assume
that people are going to keep trying for this, no
matter whether someone is worried about it over here in Australia.

(24:31):
And it really depends on how quickly these next breakthroughs
are made. It's hard to feel too optimistic that the
right incentives will win out, that people will be building
systems only for the benefit of society, because we've seen
from the history of technology companies in the social media
era that profits win out, and that you can't always
trust the people in charge of the companies to do

(24:51):
the right thing. As for the big societal, terrifying, world-ending
scenarios, I think maybe we park those and say
that's science fiction, at least for our lifetimes, and hopefully
beyond; otherwise, what are we doing sat here talking about it? But
the thing that I really find a
bit frightening at the moment is that it's so unclear

(25:13):
how the next generation going through high school, or going to university,
are going to plot this path over the next
few years, because they're finishing school and going into a
workforce when nobody in the workforce even knows what they're
going to need in the next few years. So how
they make those decisions is going to be really important. But
you know, there are big opportunities out there as well. I mean,

(25:34):
there are the scientific breakthroughs that could be made, the environmental breakthroughs.
But you know, overall I'm infuriatingly on the fence. I'm
worried about a lot of it, and I'm optimistic about a
lot of it, because like everyone else, I just don't
know how it's all going to end.

Speaker 1 (25:50):
Thank you for listening to The Fin. I'm Lisa Murray,
with Financial Review technology editor Paul Smith reporting today. The

(26:12):
Fin is produced by Alex Gow, with assistance from Mandy Coolan.
Fiona Buffini is head of Premium Content. Our theme is
by Alex Gow. If you like the show and want
to hear more, follow us wherever you get your podcasts,
and consider rating and reviewing us, as it helps others
find us. For more stories about markets, business and power,

(26:34):
subscribe to the Financial Review at afr dot com slash
subscribe. See you next week. The Australian Financial Review