All Episodes

May 27, 2025 99 mins

The rush of technology confronting us has gone unnoticed by most of the world's inhabitants.

That is, until now. The moment we, as individuals, start to recognise the magnitude of A.I., it is likely to rearrange our view of the future: both short and long term.

The question is: how do we handle it going forward?

Nigel Horrocks and Justin Matthews recently established a Substack, CREATIVE MACHINAS.

Nigel has had an illustrious career in journalism and internet development, and is a recent graduate of an Oxford University course in AI.

Justin Matthews is a senior lecturer in digital media at AUT.

They share with us what they know of the present and what’s ahead.

And we visit The Mailroom with Mrs Producer.

File your comments and complaints at Leighton@newstalkzb.co.nz

Haven't listened to a podcast before? Check out our simple how-to guide.

Listen here on iHeartRadio

Leighton Smith's podcast also available on iTunes:
To subscribe via iTunes click here

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
You're listening to a podcast from Newstalk ZB.
Follow this and our wide range of podcasts now on iHeartRadio.
It's time for all the attitude, all the opinion, all
the information, all the debates of the now: the Leighton
Smith podcast, powered by Newstalk ZB.

Speaker 2 (00:27):
Welcome to podcast two hundred and eighty six for May
twenty eight, twenty twenty five. Well, this is going to
be the shortest intro I've ever done. I believe the
following discussion runs about an hour fifteen minutes, and it is...
it's vital for a lot of people to hear it.
It's also interesting for those who think it won't be interesting,

(00:48):
at least that's my interpretation. So we have the discussion,
then we have the mail room, and then there is
a little more on AI at the back end before
we go. But like it or not, AI is fast
changing the world in many industries. It may not have
affected or involved you yet. Just as the Internet now

(01:09):
encompasses everything and involves all of us, from how we
communicate via email to how we shop online, AI will change the
way that we do a lot of things. I was
surprised myself. Now, whereas the Internet took some years to
reach the mass population, AI is changing industries quicker than
you would imagine, and it's bringing serious ethical questions about

(01:32):
what we're letting ourselves in for and who is pulling
the strings to control it. The big tech companies are
pouring literally billions into research to be the market leader.
So after a very short break, I'll introduce you to
both Nigel Horrocks and Justin Matthews, who have put together

(01:53):
a Substack. But just as importantly, if not more so,
they both have careers and histories that have
led them to this very event, AI, with some considerable knowledge.
Buccaline is a natural oral vaccine in a tablet form

(02:15):
called bacterial lysate. It'll boost your natural protection against
bacterial infections in your chest and throat. A three-day
course of seven Buccaline tablets will help your body build
up to three months of immunity against bugs which cause
bacterial cold symptoms. So who can take Buccaline? Well, the
whole family, from two years of age and upwards. A

(02:36):
course of Buccaline tablets offers cost-effective and safe protection
from colds and chills. Protection becomes effective a few days
after you take Buccaline and lasts for up to three
months following the three-day course. Buccaline can be taken
throughout the cold season, over winter, or all year round.
And remember, Buccaline is not intended as an alternative to
influenza vaccination, but may be used along with the flu

(02:59):
vaccination for added protection. And keep in mind that millions
of doses have been taken by Kiwis for over fifty years.
Only available from your pharmacist. Always read the label and
use as directed, and see your doctor if symptoms persist. PharmaBroker,
Auckland. Leighton Smith: AI. It's become more prominent in

(03:26):
my life of recent times. I mean, there is Ai
Fabler who we've had on the podcast once or twice,
and he writes very good stuff, but that doesn't tell
us much about the whole AI industry. Let me read
you some headlines. Every one of these has come in
the last week; one was just this morning. On both
sides of the Tasman, we are too fearful of AI.

(03:47):
Doug Burgum warns, whoever wins the AI race controls the world.
Who's Doug Burgum? He is the Interior Secretary responsible for
managing the, well, more than five hundred and seven million
acres of federally owned land in the US, and he
is haunted by a fear that seems at first glance
outside his mandate. He worries the free world will lose

(04:07):
dominance in the field of artificial intelligence, and with that,
the future. Then there's a cap on it, so does
the President. Forget AI: students are already cheating their way
through exams. And last for the moment, but the least important one: Chris
Luxon defends National's use of AI. So it's getting more

(04:29):
and more publicity and it's affecting more and more lives.
Nigel Horrocks was the news editor at Newstalk ZB, well,
when it became Newstalk ZB, in that period, and he is,
according to people who have informed me, or re-informed me,
the best news editor that we ever had. What more

(04:50):
can I say? He also is a pioneer, or was
a pioneer in internet matters. But he's also done something
very recently. He did a course at Oxford University, well,
not at Oxford, it was online, but it was a course
on AI. And the important thing, to my understanding, is
that he almost threw in the towel part way through

(05:12):
because it was so, so hard. He didn't, and he
ended up getting ninety percent, which at Oxford, I'd have
to say, is ginormous. Nigel, welcome to the Leighton
Smith podcast. It's great to have you here.

Speaker 3 (05:25):
Thank you for having me here, Leighton, and good to
catch up again.

Speaker 2 (05:28):
Indeed. Now, along with you is somebody you're in, well,
I'll say, in business with. His name is Justin Matthews.
He is a senior lecturer in Digital Media at AUT.
He has worked in media and film, in TV, design,
technology and the web, and at one stage Justin ran

(05:49):
the New Zealand Herald website from the tech point of view,
and he is co-Substack writer along with Nigel for
a site called Creative Machinas. Justin, let me start with you:
what does Creative Machinas actually mean?

Speaker 4 (06:07):
Oh, that's a good one. It's actually "machinas", which I
had to learn as well because it's Latin, but it
basically means machine, and it was a fancier way of
going "creative machines", basically, kind of giving it that little
bit of edge or difference.

Speaker 2 (06:24):
Nigel, who started it? You or he?

Speaker 4 (06:26):
We started it together?

Speaker 3 (06:27):
Really, we both had an interest in AI, and we
also have both done a course on social media at AUT.

Speaker 4 (06:38):
So, and we both actually worked at

Speaker 3 (06:40):
The Herald Online in that period, so we've known each
other for a long time and share an interest in technology.
But there's a lot of people talking, Leighton, you know,
about AI, but a lot of them are just talking
about the tools and getting very excited about the next
variation of whatever tool is coming through. And I suppose

(07:01):
because of my journalistic background, I'm especially interested, as you are,
in the wider picture, how is this actually affecting life
or how could it affect life?

Speaker 4 (07:12):
And what are the implications?

Speaker 3 (07:14):
What are the ethical implications? We seem to... I've been
through the early days of technology startups and the Internet,
from before there was even a web browser,
and what I saw over that time was exactly the
same thing. For a while I ran a magazine, the very successful

(07:35):
NetGuide, which discussed all these issues as well as
helping you online. Whereas a lot of people in the
kind of tech world simply get excited about the next
big thing without actually taking a breath and saying is
this good for humanity? Is this something that we should

(07:55):
actually be pushing?

Speaker 2 (07:57):
So there is a good and a bad side to
AI, as far as I'm concerned. Would you agree?

Speaker 3 (08:02):
Definitely, there are certainly some good things. I think the
most exciting thing is that in terms of health development,
AI is definitely emerging as a very practical tool in
cancer intervention and treatment in a variety of ways. And
in fact, some of the health

(08:23):
people who are working in the UK and USA say
that AI is enabling them to do a faster approach
to working out how to cure some diseases and cancer,
and it looks as though it will have a critical
impact on medicine and the development of treatment. So AI

(08:46):
tools are definitely helping develop solutions for curing disease. But
there are other aspects of AI that perhaps need debate
and possibly regulation.

Speaker 2 (08:58):
When you talk about health being the most important from
this point of view, from this position, then there is
the example, which is only very recent, of AI
telling someone to go kill themselves.

Speaker 4 (09:13):
Yes, actually that was very concerning. That was involving a
company called Character.AI. Basically there was an issue
where the person was talking to this AI and they
were getting more and more lost in the nature of
their reality, and they ended up... the AI

(09:39):
basically ended up convincing them that their world wasn't real,
and so the person chose to follow the AI into
the next life or something and kill themselves.

Speaker 2 (09:50):
Charming. And I can imagine there are people who, under
those circumstances, certainly some circumstances, would follow through on that.

Speaker 4 (09:58):
Well.

Speaker 3 (09:59):
One of the big issues... I was watching
today the Meet the Press program that runs in America
on Sundays, and the whole program was devoted to what
they consider to be one of the big issues in
America at the moment, which is loneliness, and advanced robotics

(10:20):
and emotional AI are actually reshaping human connections, so that
there are companies, especially one in China, that are producing
robotic companions. And one of the big moves in the development
of AI is what have been called AI assistants that

(10:41):
will be able to guide us through life. We have a
little bit of that at the moment, where we can
ask Siri on our Apple phones, or Alexa if you
have one of those Amazon devices. But where this is
going is that people who need companionship are abandoning the

(11:02):
traditional dating apps, for example, and there are now very
popular dating apps in America which are AIs, so in fact,
you're not meeting real people, you are having a supposed relationship.
Now, where that is going, it means that these
AI robotics are actually suggesting to you what you

(11:26):
should be doing with your life.

Speaker 2 (11:28):
There's an extension to that, the mention you made of robots
and things. I want to leave that for the moment,
but for heaven's sake, don't forget to remind me if
I forget. Let's just work our way through some of
the aspects of it, because there'll be
spin-off from pretty much all of them. For instance,
AI is mimicking leading music artists in unofficial songs featuring them,

(11:52):
and they're being released on platforms like Spotify. How are
they getting away with that?

Speaker 4 (11:56):
Well, the way they're doing that, Leighton, is because there
is all this unsettled law at the moment about what's
legal and what's not, and that huge gray space, I
suppose, until it's defined, is going to allow the AI
companies and people building these sorts of tools to create opportunities.
And then you've got people out there wanting to take

(12:18):
advantage of it. We don't know what's law yet for
what is considered to be true copyright around the AI stuff.
And so you have these people using these tools. Some
of them are using them to create interesting stuff; they're
being very creative. Some of them are fans and they
want to create, effectively, music based off the artists

(12:41):
that they're fans of, and then some of them are
just doing it actually quite maliciously to create opportunities to
further their own gain. There was that recent case, for instance,
where this guy basically created this entire industry of fake
music from AI and also having AI bots listen to
it so that he could hack the streaming system of Spotify,

(13:05):
and he was basically able to take millions of
dollars of streaming fees. So, yeah, this is a really
interesting space where you've got this creative... This is one
of the things we do with the Creative Machinas thing:
We keep looking at all the material about where does
AI begin and human interactions start, and what are we

(13:25):
going to do as far as understanding where those boundaries
should exist, because at the moment, no one seems to
know exactly where we should be drawing the line.

Speaker 2 (13:33):
Let me ask a question associated with that, that
occurred to me while you were mentioning it. Let's take
a singer, a singer that I'm going to mention, Doctor
John, for two reasons. One is he's right up
there at the top with me, well, with some
of his music. And secondly, he's now departed; he's dead.

(13:56):
Now I get the idea that somebody could reinvent him
through AI, write new songs, and have him sing them.
Or would somebody else? How would they
do that, getting the voice to sound the same? And secondly,
the most important question is would I want to listen

(14:17):
knowing that it's not him.

Speaker 3 (14:19):
There are some Michael Jackson songs out there which do
sound a little bit like Michael Jackson, because if you
put Michael Jackson into all the tools, you can then
end up doing stuff with Michael Jackson.

Speaker 4 (14:34):
I think what's interesting.

Speaker 3 (14:35):
I've talked a lot about this, because some years
ago there was a little bit of this sort of
stuff going on. I don't know if you remember, Leighton,
there was a famous Natalie Cole album called Duets
with Nat King Cole, and the two of them sang the

(14:55):
songs together. But of course Nat King Cole had departed.
Now, that is not quite the same as doing it,
but I'm just saying that over time there has
been a bit of trickery going on. Record companies use
a tool called Auto-Tune, where sometimes they
can change, make the voice of the artist better. This

(15:16):
is a whole different copyright game, though, where the tools
are out there where you can feed all the Michael
Jackson songs and then get Michael Jackson to create.

Speaker 4 (15:28):
a new song. And that is...

Speaker 3 (15:33):
The problem is that, if you recall the early days
of the Internet, in which you were on board very early,
regulation always takes a long time. Lawmakers take forever
to actually do something about things, so a lot
of stuff happened on the Internet before lawmakers woke up.

(15:53):
And that's why we've now got quite a lot of
issues with social media because the lawmakers and regulators didn't
actually address it. And that is what's going on here.
As Justin says, the issues about copyright are in a very
gray area.

Speaker 2 (16:10):
But just going back to my question at the end:
would I want to listen to it, knowing that it
wasn't Doctor John?

Speaker 3 (16:18):
No, well, I'm a great fan of Doctor John too,
and I certainly don't particularly want to listen to Doctor
John singing songs that he didn't do.

Speaker 4 (16:29):
I mean, I think that the answer to that is,
it does appear people are prepared to listen to songs
that are AI. The latest sort of statistics that are
coming out or people exploring this are saying that audiences
don't seem to necessarily have a big issue with the
idea at this stage that some of this stuff might

(16:49):
be AI. And this is another concern, I suppose, or
another challenge for us: a lot of artists, obviously...
a lot of artists are resistant to this.
There's a case of them writing open letters about wanting
to kind of stop some of this stuff, but it
doesn't appear audiences are convinced that it isn't something

(17:09):
that they're prepared to be part of. And so, to your
question specifically, I think the nature of it is that
certain percentages of audiences are prepared to
listen to new music. The nature of that is that
we think that audiences are going to be into it.

Speaker 2 (17:27):
Okay, there's a connection with that with something else that'll
come up eventually. Let's get back to some other aspects
of it. We know that when it comes to the
military there are going to be some very big questions
asked. Now, I mentioned, maybe I didn't mention this one,
another headline: CIA says winning tech war with China top priority,

(17:51):
citing existential threat to the US. Is, basically, AI likely
to become an existential threat to humankind?

Speaker 4 (18:01):
I think that there's a lot of talk about the
doom of that, and look, the reality is that there
is a percentage of truth to the idea that AI
might end up destroying us, and a lot of people
in the field of artificial intelligence development, the people that
are working at Google, Open Ai, Anthropic, these are sort

(18:22):
of these leading companies exploring these AI tools. They do
believe that there is a percentage of truth to the
idea that AI might destroy us. But it's again, like
anything with probabilities, it's a bit of an unknown. There
is a lot of safety that goes on around the
use of constructing these AI technologies, and there probably is

(18:46):
more of a case of it being beneficial to humankind
rather than destroying us.

Speaker 3 (18:51):
I think the warfare, though, Leighton, is definitely one that
does raise some serious questions. I mean, forty years on from
that movie The Terminator, about killer robots, we now have
autonomous weapons warfare reshaping global military strategy and also determining

(19:11):
who wins and loses. We've got AI, we've got drones,
we've got autonomous weapons being deployed in the real-life
conflicts of the moment, Gaza and Ukraine. The Israeli
Defense Ministry Director General says that over the next decade,
AI robots on land and sea will be the ones

(19:33):
who are actually doing modern warfare. He says that it's
a complete game changer and the future battlefield will be
very much unmanned systems fighting together.

Speaker 2 (19:48):
I read somewhere that there is likely to be a
combination for some time of, shall we say, AI and
real life on the battlefield: battalions made up of a mix.

Speaker 4 (20:04):
Yeah, that's pretty much the play that will occur for
at least the next decade. The systems are not powerful enough
to be completely autonomous; like Nigel mentioned with the Terminator, we
certainly haven't got that level of technology. But the integration
of artificial intelligence machines, combat machines, and humans is definitely

(20:25):
the future. I mean, the militaries, currently in the US,
have been working with these autonomous fighter jets that effectively
fly with a main fighter jet, a
leading fighter jet, and they will basically just be following
it and then going ahead and sort of scouting out

(20:48):
locations and fighting with them. So I think these kind
of combined swarms of AI with humans is definitely the
future that is currently in play.

Speaker 2 (20:58):
Do you remember, a few years ago, there were
news items, and movies, with flocks of bird-like
creatures? They weren't... would we call
them AI? Let's do so anyway. They were
chasing people through the jungle and killing them.

(21:21):
What's happened to that? Do you know? Any
idea? Because there's been nothing mentioned of it in the
last, well, two or three years.

Speaker 4 (21:30):
I'm not sure which one you're specifically talking to, but
it does sound like the drone stuff. I mean,
this is one of
the big concerns about the automated artificial intelligence military stuff,
and that is these swarms of AI drones that
kind of do what you were just saying. They effectively
create a swarm of machines that you cannot escape,

(21:55):
that they kind of can swarm around you, they can
track you down, and it doesn't matter how agile you are;
as you point out, if you're running through
a forest, you'd basically still be wiped out.
And that's scary, because that kind of swarming
technology is a sort of new category of danger when

(22:16):
it comes to military conflict.

Speaker 2 (22:18):
So there's another scary one, whichever way you look at it,
and that is the suggestion that AI could replace politicians.

Speaker 3 (22:28):
Well, that is definitely... there are some people who are
suggesting a number of ways that AI
could certainly get involved in politics. There's a belief that politicians,
for example, could use chatbots with which they could discuss

(22:53):
with their constituents and resolve disagreements, and also AI could be
used to write legislation. The appeal of replacing flesh-and-blood
politicians with chatbots is definitely being seriously advocated. One
more radical idea, which some are suggesting, is actually eliminating

(23:16):
humans altogether from politics. And we know that some politicians
are not terribly bright or doing great things. If AI
advances to the point where it can reliably make
better decisions than humans, then that could certainly be an area
to be explored, according to some enthusiasts.

Speaker 4 (23:38):
They're calling it the "algocracy", I think it's pronounced, which
is basically algorithms running government.

Speaker 2 (23:47):
I don't know about you guys, but I have no time
for tolerating that at all, none whatsoever. But it's frightening
to think that somehow it could happen, and maybe
you'd like to trace a possible trail for it.

Speaker 3 (24:03):
There was a candidate that stood in Sussex in the
UK elections, and he actually created an AI
chatbot, called AI Steve, who was
actually standing. His argument was that
he had created a politician who was always around to

(24:26):
talk to constituents and who could take their views into
consideration and do better research to create better policies. He
didn't do very well in the elections, so there's probably
the answer to your fears, Leighton.

Speaker 4 (24:41):
I think the way to track that, though... I
think you're correct; it is very concerning, the idea of
allowing ourselves to be subject to algorithms as a way
of governance. We have already seen, quite frankly, the danger,
or, I just think, the destructiveness that algorithms

(25:03):
have created around social media experiences, and how
that often leads into problems around the way
the algorithm presents content: information, misinformation, disinformation. It isn't a
big stretch to think that. The thing that people don't
quite understand, I think, is that we are already engaged

(25:24):
in a lot of stuff that's to do with artificial intelligence.
The social media algorithms that a lot
of people engage with every day are using artificial intelligence.
They're not just sort of what you would
call simplistic algorithms from previous times; they are using
artificial intelligence in the way they're operating. So we already
see that it's not necessarily very great there. Extending that

(25:47):
into arms of government? I'm with you, it's extremely concerning,
and something I also would be very adamant to push against,
you know.

Speaker 2 (25:55):
It's a little like some medical matters that
over the years have caused consternation. Artificial insemination springs to
mind, in the very early days, and I had conversations
with one doctor who I was somewhat friendly with, conversations
on radio, and he took the attitude

(26:17):
which many people, if not most, might say, is the
logical one, and that is it's not up to them
to make the decision on how something is used. It's
just up to them to provide responses to some questions
and issues that are begging.

Speaker 3 (26:35):
And sorry, go on, no, no, please go ahead.

Speaker 2 (26:40):
Well, if you follow that path, we have now advanced,
if that's the word, down a trail
where there is much more going on
with reproductive matters than back in those days, and there
is no knowing where it might stop. I mean, this
has just come to me; I never thought

(27:02):
of this before. What about the implanting of a chip
in a baby's brain while it's still in the womb
for whatever reason.

Speaker 4 (27:08):
Well, I mean, the answer to that is that sort
of capability isn't possible yet. But we know with Elon
Musk's Neuralink and other technology companies that the idea of
planting chips in brains is already underway. Like what you're
saying, the argument, I think, is that
if we don't have a line to hold,

(27:32):
then we are in danger of allowing things to just
sort of continue. And one could say, oh, well, this
is the old slippery slope argument. But I do think
you touch on something important, in contrast to, like you've said,
the artificial insemination issue and the discussions with the doctor.
I think for AI, we do need to be very
heavily in the debate, as individuals, as groups of society.

(27:55):
I think the difference here goes back to one question
you asked earlier, Leighton, which is, we talked about it,
there's that percentage of an existential threat
to humans. I think this is the one where we
probably need to, therefore, obviously
be in the debate, because I think taking, say, a
position of "let's just see where it goes" is not

(28:18):
a healthy one, because of the nature of the
danger that could come from it. And I actually
think this is why we all need to be discussing
where AI sits in our lives, how much of it
we're prepared to accept, where we should be accepting
its inclusion, and where we should be pushing very hard against it.

Speaker 2 (28:41):
That's intriguing, and I'm pleased to hear you say it.
But I think of things that are out of our control,
as in, in some parts of the world, they
legitimize activities, I'm talking ethically, that are banned here
or that we don't accept. And once you're
on the path of developing something and you've got

(29:04):
an open gate, then there will be minds somewhere who
will want to utilize it and develop it. And while
we might agree that we should be involved in it,
it may be taken away from us.

Speaker 4 (29:18):
This is true, and this is where things like what
China is doing cause some concern, because I think it's
fair to say that China has a different perspective on
the approach to the way they're integrating and using AI.
They're obviously going to be pushing more of that into
their social credit system that they currently have, where people
are tracked in kind of real time. China is basically

(29:42):
to your point about them having a different view of the
way AI will be integrated into their social structures. The problem,
I think, is that America's response right now is a
little like "we need to make sure we're ahead of
China" in many ways, but we have the hope that
all societies continue to push back against

(30:04):
what that particular approach is going to be for us.
You know, there have been arguments that we may go
down the track of trying to integrate it too quickly.
I personally have a problem, for instance, if we just
go back to a local matter in New Zealand: I
have a huge issue with the fact that
supermarkets are trialling AI facial recognition systems, so that

(30:30):
you enter the supermarket and it sort of effectively tracks
you and knows who you are, and they're able to
deploy security guards to push people out, and then
we've had several incidents where it's been false positives. I mean,
this is the kind of, yeah, this is the kind
of problem, I think, where we do need to be
very vocal and we need to be pushing back, because

(30:51):
it's exciting technology, but it's not one we should be
just adopting without question, just because you can. So...

Speaker 2 (31:03):
I refer to a lengthy piece that Frank Furedi,
the philosopher, wrote back in twenty twenty-one, headed "We need skepticism
now more than ever".

Speaker 3 (31:17):
Totally agree, Leighton, and I think the real
issue that we just need to address, which you referred
to with politicians, is, you know, how much can we trust?
I don't know if you remember the nineteen sixty-eight
classic sci-fi movie, 2001: A Space Odyssey, which was actually an early example of AI,

(31:40):
where the computer locked out the astronaut and said,
I'm sorry, I can't do that, I can't open the doors.
You know, the issue is what happens when AI goes
rogue, or when it gets so clever that, in fact,
these sorts of issues happen where we are starting to

(32:00):
be totally controlled by AI, and that's already happening. A
new wave of so-called reasoning systems from companies like
OpenAI has started to produce incorrect information more often, and
the companies don't really know why. They're politely calling
these hallucinations, but they don't actually know why incorrect

(32:25):
information is coming through and it could have serious implications.

Speaker 4 (32:31):
Well worse than that, actually, I mean, the reality is
that these these GPT chat style systems do show a
certain level of intelligence that it can be concerning. There's
a there's a whole process in AI development called red
red teaming, and the point of red teaming is to

(32:52):
take a model that's been freshly baked, let's say, and
check that it has adhered to certain safety requirements and
that it is not acting in a way that is
destructive or dangerous in some form. But recently, for instance, Anthropic,
which is one of these technology companies that's competitive with OpenAI,
and they create a whole bunch of AI models, they recently

(33:15):
were explaining how one of their latest systems they had
to do quite a bit of work with, because it
was doing all sorts of things like trying to blackmail
the developers so that it would stay alive when they
indicated they might be changing it. It tried
to read their mail. It basically engaged in all of

(33:37):
this sort of nefarious activity to kind of keep itself alive.
This is the type of mechanistic stuff that is going on,
and we know we need to be careful about making sure
that this is always something we're looking at, effectively, because
if that's this version of AI, what does it mean
as we continue to increase and develop the technologies? Exactly.

(34:00):
Justin, how would that Opus, as they're known, how would
it have arrived at that decision? How would it have
come across blackmail and utilized it, or attempted to? This
is the part where we're not sure. Actually,
one of the big admissions is that this is an

(34:21):
admission from the technologists, the people that develop AI, especially
these GPT chat systems, is they don't know what goes
on inside. It's a black box. And this is another
reason why it's fair to have concerns because it's a
bit like playing with the button of a nuclear switch

(34:41):
where you don't actually know it's connected to a nuclear bomb,
because we don't really know what's going on inside. So
to answer that point is simply to say they're not sure.
What it does seem to be showing is some sort
of rudimentary survival tactic where it's understanding that it's under threat,
and if it's under threat, it needs to take action.

(35:01):
And this is where that red teaming tries to shake
out the nefarious or darker elements of what it's trying
to do to protect itself, but also indicates that there
are aspects of this that cross into that question about
what does it mean for something like that to actually
be intelligent. There is this ongoing claim that none

(35:21):
of these systems yet are what you would call intelligent,
but one does have to ask, well, then why does
it know how to try and blackmail and protect itself?

Speaker 2 (35:30):
If you let your mind run away with you, it's
actually frightening. I want to quote you something. I
don't know whether you've come across Tyler Cowen. No? Neither
did I. This was published just a few days ago,
and this is how it begins. This is the most
important essay that we have run so far on the
artificial intelligence revolution. I'm excited for you to read it.

(35:51):
There's the author. Are we helping create the tools of
our own obsolescence?

Speaker 4 (35:57):
Now?

Speaker 2 (35:57):
If that sounds like a question only a depressive or
a stoner would ask, let us assure you we are neither.
We are early AI adopters. We stand at the threshold
of perhaps the most profound identity crisis humanity has ever faced.
As AI systems increasingly match or exceed our cognitive abilities,
we're witnessing the twilight of human intellectual supremacy, a position

(36:21):
we've held unchallenged for our entire existence. This transformation won't
arrive in some distant future. It's unfolding now, reshaping not
just our economy but our very understanding of what it
means to be human beings. We are not doomers, quite
the opposite. One of us, Tyler, is a heavy user

(36:42):
of this technology, and the other, Avital, is working at
Anthropic, the company that makes Claude, to usher
it into the world. Now, how does that affect you?

Speaker 4 (36:55):
I mean the answer is that this is the part
we need to be vigilant. And I think that's what
these articles, these commentaries from actual professionals that are already
doing stuff in the industry, are saying: we need to be vigilant.
And it's interesting that he has to predicate this by
saying we're not doomers, because it's very easy in this

(37:15):
current space to dismiss these concerns and almost be a
bit derogatory by going, oh, you're just a doomer because
of the techno enthusiasm over the technology. But the reality
is we do need to be vigilant, and these kind
of articles are ones where the issues brought up need
to be seriously considered. It is a category shift for us.

(37:39):
The thing I just mentioned about the Opus tool knowing
how to kind of protect itself, the nature of that red teaming,
the whole idea that red teaming has to exist
because the technologies are not necessarily baked with human values
in them. These are all reasons for vigilance and that
we do need to not just be overwhelmed with the

(38:03):
techno enthusiasm. There needs to be some techno-dystopian thinking here too.
We do need to go into this with eyes
open, asking hard questions.

Speaker 2 (38:13):
Let's get down to some... sorry, sorry, Nigel.

Speaker 4 (38:16):
One of them.

Speaker 3 (38:17):
One of the takeaways from the course that I did
which really shocked me was the fact that there is
so much talk with these experts about AI replacing the
human mind. And Google's leading AI scientist said just this
week he believes there is a fifty percent chance that

(38:37):
it will be developed by twenty twenty eight. Now the experts
are talking about twenty twenty eight to twenty thirty, which
is only a few years away.
Here's one of the worries about that: whether AI can
outsmart humans, or whether AI actually is developed to the
point where it's making decisions that humans should make. And you know,

(39:01):
my worry is that we all make decisions, sometimes good
or bad. We're humans. We don't always get it right.
But we really accompany a major decision in our
life with some sort of emotional response as to how
that's going to affect people around us. And that is
missing from something that's mechanical and is a technology tool

(39:24):
that, A, doesn't have the life experience that we have,
which we also draw on to make those decisions. But
it also doesn't have that emotional sense of
how that could affect people.

Speaker 4 (39:36):
And you know that goes back.

Speaker 3 (39:37):
To your comment about an AI tool saying kill yourself.
You know you wouldn't say that to someone. You would
discuss their depression or their problem and do it with
some empathy and try to work out a solution, whereas
the AI tool didn't do that. And you know, my

(39:58):
worry is that these experts are rushing to try to
match what they call the human mind by twenty twenty eight.

Speaker 2 (40:08):
How about the development of AI to the point where
the one-off, as far as we know, go kill
yourself becomes rampant. Then you've got the normal spread of
influence that would hit some people more than others, some
groups more than others, some ages, et cetera, of just

(40:29):
adopting it and saying, well, you know, go kill yourself,
to what used to be your best mate. It
would influence and infect society.

Speaker 3 (40:41):
Again, going back to that example I gave of having
an AI companion. If you're lonely, maybe you're getting on
in years, you live by yourself, and you rely entirely
on this person that you've actually formed a sort of
emotional relationship with, because you're able to talk to somebody
when you otherwise aren't having much social contact. Again, that's

(41:04):
the real danger that this person could well start to
influence important decisions in your life, which may not be
good ones.

Speaker 4 (41:14):
I mean, the reality is we're on the cusp, Leighton,
of some new devices. There was that recent announcement of OpenAI
buying Jony Ive's io company, because they're going to start
looking to release these AI devices of some form.

(41:35):
But I think what is clear about these devices is
they are going to be ambient AI, which means it
will be some type of device you are wearing. It'll
be wearable, and it is basically tracking you twenty four
to seven full time. It's kind of got a situational
awareness about your life. And also the impact of that

(41:59):
is that it's capturing anyone else interacting with you at that time.
So we're going down a path where AI is going
to suddenly be front and present in our lives for
people very quickly. Because I hate to say it, but
I suspect that these devices are going to be enthusiastically adopted,

(42:21):
maybe because people aren't asking questions enough, but also because
they do probably shortcut a lot of activity that would
normally take longer. But this goes to your point about
being exposed to it quite regularly and frequently;
suddenly it will be a persistent thing in our
lives.

Speaker 2 (42:40):
Well said. From the article in the Spectator this current week: The
age of AI and nuclear energy is coming. The anti
nuclear conglomerates are promoting windmills and solar panels, which have
proven to be costly and inefficient. They've
also promoted green hydrogen, which has proven to be even

(43:00):
more costly and so inefficient that every project the Labor
Party has subsidized since coming to power in twenty twenty two
has failed and closed. And there's a lesson in that,
I think. Anyway. Meanwhile, electricity prices are at their highest,
energy production is at its most inefficient, productivity levels are
at their lowest, supply chains are at their lowest, and
housing prices are at their highest, which also means rents

(43:23):
are at their highest levels ever in Australia's history, as
are food prices. Young people have every right to be concerned.
The major driver of economic growth, real wages, and overall
living standards is labour productivity, which is the barometer of
economic efficiency. The Australian Bureau of Stats recorded that from

(43:44):
twenty sixteen to twenty four labour productivity growth has reduced
from one point seven to zero point nine, and the
trend is continuing downward. This lowering of the standard of
living has coincided with the gradual increase in energy prices
such as electricity and petrol and petrol prices. This is

(44:05):
not a coincidence. Now, all that might seem alien to
what we're talking about, but there is a connection with
nuclear power, which the left is opposed to,
and the standard of living that people are going to
be sucked down to if they keep on this same path.

(44:30):
Is there an AI answer to this, Well.

Speaker 3 (44:32):
I think the elephant in the room remains
the question of what is actually going to happen to
people's economic power if AI takes the jobs. On artificial intelligence,
many reports, from PwC, McKinsey, the World Economic Forum, are

(44:56):
all predicting that AI will fundamentally transform the global workforce
by twenty fifty. Again, that's not very far, and the
estimation is that up to sixty percent of current jobs will
require significant adaptation because of AI. Now, the

(45:16):
issue is that routine and repetitive jobs that people do
are already being replaced, because corporations are seeing that they
can save money on people doing data entry and
all sorts of other routine, repetitive jobs. So what is
going to happen to these people, Leighton? If you're going
to have large numbers of the population who are no

(45:40):
longer going to be able to have a job, but
also they won't necessarily have the skills to adapt to
another job because automation and AI tools will be doing
so much of the stuff that they're capable that they
might be capable of doing. What are they going to do?
Are they going to be sitting at home? Are we

(46:01):
going to have to increase benefits so that sixty percent
of the population end up with a government benefit? I mean,
it has enormous economic implications.

Speaker 2 (46:12):
Well that's what somebody was saying to me the other day.
What's going to happen to our jobs, and where can
I go if I lose mine, this individual asked,
because I'm not trained in anything else.

Speaker 4 (46:28):
Look, I do think jobs are definitely going to change.
It's transformative. I think it's disingenuous not to be blunt
that we are going to see transformational or huge shifts
in what it means to be employed in certain sectors.
Part of what we do in that newsletter is explore
one of the things that we thought never would happen,
which is that AI actually is pretty good at doing

(46:50):
creative stuff. So we've got a whole concern around even
the creative industry stuff. But I also think that in
every great technological shift, there has been a discussion about
the loss of jobs, and while we've lost them, we will
also get new jobs. And look, I think that for
all the conversation around the negative concerns and the dangers

(47:13):
of AI, I think that there are a lot of
positive opportunities for us to create interactions that may well
solve problems. I mean, part of your question was
what could AI do around all these crises? Look, AI
does bring the opportunity to be able to look at
information in ways that generally human beings can't and solve

(47:33):
problems in a way that we aren't able to. Some
of the exciting stuff, for instance that Google's done with
cancer research, as Nigel picked up at the front
of this, and also even just understanding some of our
climate problems. All of this is possible. But in that
we are going to lose jobs. But I think new

(47:53):
jobs will come out of this. What those will look
like, as in any transformational shift, we don't know. I
was at a forum the other day
where I was asked to speak to high school
leaders about some of the stuff to do with AI
and the fact that it's becoming part of the territory
and what does it mean to look for a job,

(48:14):
And there were some concerns, and I said the same thing.
I said, Look, jobs will change, but I think we'll
see new jobs. One of the things I did say
was that I think we do see a future where
I believe if we have a good pathway here, humans
and AI will work together. I think that we will
see sort of hybrid jobs where neither can do it individually,

(48:35):
but you're working together. There's a great anecdote in a
book called Range where Garry Kasparov, the ex-chess champion,
when he lost to Deep Blue and finally the machine
beat a chess master, didn't just go away. What
he did was he actually went off. He got really

(48:56):
into it, and he created this whole exploration with AI
and humans. And what they did was he actually came back.
And this is not that well known. He came back
and he took on Deep Blue again, but he
took on Deep Blue with AI himself, and the two
of them together beat the machine every time. Human beings
working with AI together were unstoppable. And I think there's

(49:18):
an interesting thing in that about some of the buoyancy
or the positive nature of the way that AI and
humans might interact with each other.

Speaker 2 (49:26):
There's a question that I haven't heard really asked before.
Working together creates relationship. What sort of relationship would you
get back from an AI operative? So you work with
somebody and you become good mates, and you talk about
all sorts of things, and you might get down to the
pub together, whatever. How would you get any

(49:47):
sort of relationship on that basis from AI?

Speaker 4 (49:52):
Yeah, I understand what you're asking, and I don't think
we quite know there will be a relationship though. I
think that to do that there needs to be some
sort of interpersonal mechanism. To be fair, some people are
using these current AI bots in a very collaborative way.
They're actually engaging with it and talking to it as

(50:12):
if it is another partner in the room, and because
of the nature of how these chat GTPs can work,
they are engaging back in a way that seems fruitful.
I mean, like I use my open AI chat GTP
quite a lot like a collaborator. I'll have disagreements and
explore ideas with it. I'm very clear that it's a machine,

(50:35):
but it does have this thing where at times it
strikes me because it talks back to me like it's
a human being. It has responses that are very interpersonal
at times, which I do take pause at in the
moment because it makes me think, okay, this is a machine,
this is not another person. But I think interpersonal relationships

(50:57):
with machines are on the horizon as well. Whether that's
a good thing or a bad thing. I think we
aren't quite sure as a society yet, because it could
go either way.

Speaker 2 (51:10):
Well, I don't know whether I'm interpreting you correctly, but
I mentioned earlier that we'd come back to something so
on the basis that I think you have touched on
it deep fake technology. There was a weekend story from
Australia from Sydney, I think in the Australian newspaper with

(51:30):
regard to the school kids around the age of twelve
using deep fake technology, which you might like to expand
on a little, but using it to show the girls
in the classes in all sorts of compromising positions, starting
with being totally naked.

Speaker 4 (51:49):
Yeah. So first, deep fake technology, just for
the audience and the listeners, is basically the
capability of taking someone's likeness and being able to project
that likeness onto existing video or in the case of
what you're talking about, what's now the more sophisticated
version of deep fake technology: being able to

(52:11):
take a person's likeness and create completely generated new video
material of events that obviously never happened. So completely fake. And
the nature of this, yes, is that it is being
used to create pornographic experiences and also even potentially blackmailing

(52:32):
experiences, because they are able to take, in this case,
the girls' likenesses, and they're able to
put their faces on basically pornographic material. And
this is why a lot of states in America especially
have started jumping on and creating deep fake laws. This

(52:52):
was, to be fair, started when the technology was
still rudimentary. But we're going down a path now where
the latest launch of a generative video technology from
Google, called Veo 3, this is the third version of
it, which can now create video that's circulating as of this
week where it is almost impossible to tell that it

(53:15):
is not real. The video technology lets you put a
prompt in and it will generate the video. It generates
the dialogue, and it generates the sound effects to create
something that by all looks is effectively a real video
that's been created. Now you take that into deep fake
territory and we have a really concerning space.

Speaker 2 (53:37):
Indeed, the other aspect of that is the
creation of AI robotic companions that are X-rated. In
other words, they become sexual partners, and you
cannot help but have picked up over the years on the
advances that have been made in that particular area. But now

(53:58):
you throw AI in there and it's a whole new
scenario: good, indifferent, or bad.

Speaker 3 (54:05):
Well, I think again it's a worry, as I indicated
with companions, that it plays to people's
insecurities, people's loneliness and so forth. I mean, I
think it's interesting: porn has actually played a part
in just about every development of technology. In the early

(54:29):
days of the Internet, when it was
a bit of a Wild West show, it was the
availability of sexual material that actually helped fuel it. If
you remember the early days of VCRs, that was exactly
the same. So here we have a new technology with

(54:50):
the peddlers of porn, if you like, latching onto it
and trying to make hay from it. As I
say, I think the question of companionship using AI
is going to be a really major issue because if

(55:12):
you're replacing human beings who can make sensible emotional decisions
with assistants that, in some ways, may be encouraging bad stuff,
or in fact putting you in a space which
is not good, by encouraging you with thoughts and ideas

(55:35):
that aren't helpful that other humans would not suggest. You know,
I think this is a real worry. But again we
come back to the question of who's controlling this, Leighton.
I mean again, nobody is waking up to this.
regulators aren't waking up to this. It feels as though
the technology companies are just going as fast as they

(55:55):
can to develop, you know, human mind replacements by twenty
twenty eight. No one is actually putting any brakes on this.

Speaker 2 (56:05):
So who's running it, who's organizing it?

Speaker 4 (56:09):
Well, the big tech companies.

Speaker 3 (56:11):
I mean, it's all about money, isn't it.

Speaker 2 (56:13):
And again, as you say. But Nigel, is it
totally about money, or is it about, I mean,
take Bill Gates. Well, it's control, Bill. Exactly.
Bill Gates doesn't need any more money. He would be,
though, interested in, as you just mentioned,
control, and one thing leads to another. The two go

(56:35):
very well hand in hand for a lot of people.
Look at George Soros. So are you suggesting
then that they're the sort of people who are behind
pushing this.

Speaker 3 (56:50):
Well, you quoted an article which said
that whoever wins the AI race will control the world,
or something along those lines. I mean, that is
what the kind of belief is. I mean, as Justin mentioned,
there's a huge race going on between America and China
to try and win. You know, they see it as who's

(57:11):
going to control, who's going to win this. But what
are we actually winning? What do we mean by winning?
You know, are we taking a step back to say,
what do we mean? Does that mean that we will
end up with these assistants that are being developed. The
person that Justin was talking about is
the genius at Apple who created the iPhone and the Apple Watch.

(57:33):
He is the one now developing a major AI assistant.
And he's talking about how in ten years' time the iPhone
will be dead, that we'll all be carrying around some
sort of lapel device or whatever, something that will be
telling us what to do, and we'll be asking it questions
and it will be directing us.

Speaker 4 (57:54):
You've talked about which.

Speaker 3 (57:56):
Sounds crazy sci-fi, but you've talked about the real
Musk-type enthusiasm for something that goes into our heads
and some sort of thing that controls us. So, I mean,
this is the science fiction era that is actually moving

(58:16):
into reality now. And there are definitely forces out there
that are all about control, and they're quite right. Whoever
can get an AI assistant to tell you and me
and Justin and everyone listening what to do and how
to do it, and what to buy and where to
go and all the rest of it. Surely that's you know,

(58:39):
that's in a completely different world, I.

Speaker 4 (58:41):
mean, Leighton, there is a narrative out there which
I think is worth mentioning, and that is the idea
that there's a kind of developing AI cold war between
China and the US and this sort of who is
going to be the best to control it or even
as we touched on earlier in this podcast, whose vision

(59:03):
of the world is going to win out for AI?
And I think, with the forces we're talking about here, with
what you're asking and what Nigel's mentioning, if that
is part of the narrative or the play, and we've
also talked about the existential layer, then all these
threats do push forward the idea that it's going

(59:24):
to keep getting developed. But whose vision of what that
means is not clear yet.

Speaker 2 (59:29):
Does it come down to whoever drops the first bomb?

Speaker 4 (59:32):
Well, it wouldn't be a bomb, though, I think that
it's more likely to be some sort of weaponized interaction
on an AI level. I mean, there's a lot of
talk about the idea of a third world war, but
I don't necessarily know that we'll be seeing bombs
in that. That's why I think the AI cold

(59:54):
War is an interesting narrative, because it's almost like
a different type of battle that isn't drawn
necessarily by clear geopolitical lines.

Speaker 3 (01:00:04):
And is that a battle for your mind?

Speaker 4 (01:00:06):
Leighton?

Speaker 2 (01:00:07):
There are two other areas that I've got tucked away
that I know I want to engage with. One is education.
Is AI a future class teacher?

Speaker 4 (01:00:20):
Yes, I think so. Being a lecturer, I think this
is something that is currently talked about inside AUT,
and it's something all institutions, down to the
high school and primary school levels, are discussing: where AI fits
in this. The way I see it is, education is
also up for grabs, first of all in the whole

(01:00:40):
nature of what it means to have artificial intelligence working
in that space. I think teaching is going to change.
I think it will put pressures on all the layers
of education that we have. But I do think it
is going to be a hybrid situation. In fact, I
would almost say earlier when I talked about the future

(01:01:02):
of jobs kind of being hybrid jobs. I think education
is probably the first sector where we will see some
realization of this very quickly where you have a teacher
and you have an artificially intelligent machine or GPT or agent,
and they're working in concert to help educate.

Speaker 2 (01:01:25):
So how long would you put on it before
every kid is being influenced by an AI, whatever it may be?

Speaker 4 (01:01:32):
Well, I would argue they're already being influenced actually because
one of the reasons the education sector has to get
on board in some form, or maybe a better way
to describe this is they have to be there so
they can control some of the narrative, is because kids
in high school and primary school are already engaging and
playing with these chatbots, and they already are effectively learning

(01:01:56):
faster about how they work and what they can get
out of them than even the educators of
older generations necessarily are. So they're already being influenced.
It's already happening. We've just got to be part of
the narrative.

Speaker 2 (01:02:11):
Is parenting over.

Speaker 4 (01:02:14):
No, I don't believe so. I think that there's a
fundamental interaction in parenting that, in my current belief, AI
can't replace. I mean, there's a whole bunch of
factors, like presence and daily interactions. But it may be substituted
to some extent. That's again part of these concerns.

Speaker 2 (01:02:35):
Okay. So the other area that we haven't touched on,
and I want to because this is important to, I
think, all of us, everybody: the use of AI in
journalism. Does that raise ethical questions?

Speaker 3 (01:02:50):
It certainly does, Leighton, such as the potential for
bias if you're using AI algorithms. And there's already been
a problem with some of the AI chatbots that actually produce

Speaker 4 (01:03:06):
A bias.

Speaker 3 (01:03:08):
And we already have issues in the media that we
know of, where many people are concerned about bias, not
just in the media,

Speaker 4 (01:03:17):
But also in.

Speaker 3 (01:03:20):
Some of the narratives that are being driven. There's also
a need for transparency in AI-generated content. There are
already news sites that are being controlled by AI. AI
is deciding what.

Speaker 4 (01:03:35):
Clickbait might be.

Speaker 3 (01:03:39):
The best to push.

Speaker 4 (01:03:40):
There are already stories.

Speaker 3 (01:03:42):
We've had examples where strange stories have been appearing in
news sites and even in newspapers, where the AI has
produced stuff that is not accurate. I mean, the
problem is that if you ask the chatbot something,

(01:04:04):
or you ask it to do something, again, as I say,
it can make mistakes. And so if
you're handing over aspects of journalism, and we
know the news media is under economic strain, a
number of media places now are handing over some aspects.

(01:04:29):
For example, they're handing over news articles on very routine topics.
Media releases are already being processed on one of the
local news sites by AI.

Speaker 4 (01:04:40):
Financial reports, sports.

Speaker 3 (01:04:42):
Schools, weather updates, all that stuff is being handed to
AI to produce, and that way you don't need humans.
It's also analyzing very large data sets, which could be
helpful, because if you're investigating something, it can search for
stuff very quickly and come up with answers, so it
can help there. The danger is two things. One is

(01:05:06):
the writing of news without journalists. And secondly,
delivering personalized content to readers based on their interests and
reading habits.

Speaker 2 (01:05:17):
You know, that's something I just don't want to know about.
And I've got enough already anyway, but
I resist having the sites that send me what they
think I'm interested in, and I won't sign up to them.
You know, every newspaper's doing it now. I just
canceled my London Telegraph subscription: A, because it went

(01:05:40):
through the roof, and B, because they were sending
me stuff that I didn't care about. But
I want access to it so that it's
my choice. I don't want them deciding what
they think I should see and read.

Speaker 3 (01:05:54):
And I can't understand why this is being pushed so
heavily, Leighton, because there is such an issue at the
moment with trust in media, and it's been widely debated.
You know, it's being debated vigorously in America at the
moment because a CNN journalist that didn't cover the demise
of Biden is now making millions from a book which

(01:06:18):
suddenly has discovered that Biden had cognitive issues. I mean,
it's a major issue. If only one in three New
Zealanders trusts the media, then why are we now giving
it over to AI to write stories for us or,
as you say, decide what you want to read, which
means that you don't actually see stuff that may be

(01:06:40):
important to read. And that's the real worry with this.
If AI is deciding what the homepage of a news
site should be, are we actually getting the stuff we
need to see and think about, and covering the issues
that we actually need to debate?

Speaker 2 (01:06:57):
How are future generations going to be able to continue
to think in a critical manner, when it's
all being taken out of their hands?

Speaker 4 (01:07:09):
Well, this is it. And I actually think one issue
for me around the AI being incorporated into journalistic stuff
is that we've already seen, as I mentioned earlier, some
of the issues around algorithms having full control of content,
which is what goes on in social media spaces, and
the echo chamber problems and the nature of misinformation. I

(01:07:30):
just can't help but feel like there's a lesson
we learned there. So handing over what you might
call legitimate news sources into this AI system is concerning,
even on the question of how, to go back to
your point, it starts deciding what it thinks everyone
needs to see. I mean, this goes back to something

(01:07:52):
I feel we need to be careful about, and it's
easy for that line to be unclear, which is that
human beings are still awesome at curation, I should say.
The clear thing that GPT chatbots can't do is
that they're always looking backwards. They're not great

(01:08:13):
at, well, they're not good at the ability to
kind of look at something and know that this is
a way forward, or this is what we should be promoting.
That's the human curation process of anything. AI is making
that demarcation clearer every day: it's really rubbish at
this, and what we do need is to keep
the human being in the curation and the decision-making processes,

(01:08:36):
and so when it comes to things like information sources,
this is a line I think we should be holding.

Speaker 2 (01:08:41):
So the future of AI in everyday life, give me
your perspective as to where it might be. Well, you
pick the time period. I don't want
to put it on you. But it appears that
it's moving forward, if you like, at a greater rate,
a greater speed, than we anticipated.

Speaker 4 (01:09:03):
Well, let's just pick twenty thirty because it's a nice
five years from now. And I think by twenty thirty
a majority of us will have some type of persistent
wearable AI device on us, much like a smartphone. I
think AI will have integrated itself into all aspects of

(01:09:24):
our everyday lives in kind of active and passive ways.
There will be passive, what you might call ambient AI, where
it's sort of happening in the background and just kind
of making things easy, which would be the positive look. But
also active. Going back to a large part
of the conversation, I think people will have active intelligent

(01:09:45):
agents they communicate with, that they interact with
across a spectrum from how's your day going, or give
me the weather, through to full relationships. And I think
we're going to see, I think, in five years, by
twenty thirty, none of us will quite recognize the world
as it is right now. I think it will be
dramatically shifted. I might say, here's

(01:10:09):
a way to think about it. We haven't met aliens yet,
but I think in five years, by twenty thirty, we'll
effectively have aliens on Earth, and they'll just be
artificial intelligences. It will be kind of like aliens turned
up and integrated into our society. I think that's the kind
of future we're going to be seeing because it will
be that prolific and it'll be that transformational.

Speaker 2 (01:10:33):
Nigel, before... well, do you want to have an input there? Because I've got a question off the back of that projection.

Speaker 3 (01:10:41):
Yeah, no, I absolutely agree with that, I think, and that's why I'm urging that we really need... I'm really glad you're discussing it today, Leighton, because we need to be having these discussions rather than just walk into it without any discussion whatsoever. I mean, we've got a younger generation now. We've got a generation or two that have

(01:11:03):
grown up with the Internet and don't remember the world you and I remember. We've now got a generation that just accepts that they go to ChatGPT to either do their school essay or to ask a question about life. And so we're heading down that path. There is no stopping it. But should we

(01:11:24):
actually be pausing, as I keep saying, and asking questions about whether we should reframe some of what's going on, or make sure that we don't head down certain roads which could be disastrous.

Speaker 2 (01:11:41):
Well, Justin, you triggered something that I was going to include, and then when you said it, I thought, I'd forgotten about it. So I'm glad you did. Thank you. And that is, you mentioned aliens. And the question is, is it possible... now, I've asked this question in one way or another previously on my podcast, of people including a professor from Melbourne

(01:12:05):
Tech who co-wrote a book in twenty nineteen, I think, and his answer was in the negative. But is it possible that AI could be hijacked by some alien force? And when I say alien, I mean you can start local if you want, go wherever you like. I don't mean

(01:12:25):
just from external.

Speaker 4 (01:12:29):
Well, I think we could just say there's a spectrum on that, and I think the answer for me is yes, yes, yes along that spectrum. The answer at one end is that alien in the sense of foreign agencies or foreign forces, this will happen. AI is hackable. AI will be hackable. And so, going back to our twenty

(01:12:51):
thirty future cast, we will definitely have situations where, unfortunately, there will be incidents where AIs have been forced to do things or hacked in some way. And at the other end of the spectrum, I'm a big believer there are aliens out there. So if aliens do turn up, well, the reality is, yes, the answer is, if they could

(01:13:12):
turn up halfway across the galaxy to us, they can
definitely hack our AI.

Speaker 2 (01:13:17):
So you reckon that some of those more recent sightings in America may be for real.

Speaker 4 (01:13:25):
Well, I... yeah, I mean, I don't know. I'm just a big believer. There's a thing called the Fermi paradox, and it basically says... this thinker was basically saying, well, if we've got all this galaxy out there, how come we haven't seen aliens? Because

(01:13:48):
it doesn't quite make sense. Look, I think one of the answers to that is called the zoo answer. I believe aliens are out there, and I think that we've just been kept in a zoo. They don't want us to know. So, yeah, they're going to turn up at some point.

Speaker 2 (01:14:08):
Interesting. Nigel?

Speaker 4 (01:14:10):
I'm not sure.

Speaker 3 (01:14:12):
I've got a very open mind about it, but I agree with you that definitely the danger is that the systems will be hijacked by potentially what are called bad actors. We're already seeing the rapid advances of AI

(01:14:33):
being weaponized by cybercriminals in very highly sophisticated scams. You mentioned the deepfake technology. They are, with frightening accuracy, impersonating voices, they're creating fake speeches, they're definitely fooling people into handing over money from

(01:14:56):
their bank accounts. So we're really seeing very sophisticated scams using AI. So if we're now talking about the huge developments over the next five years, the potential for bad forces, whether they be political forces or whether they be criminals or whoever they are, the potential is that the world

(01:15:19):
is going to change. And again, if we go back to the early days of the Internet, Leighton, I mean, we never quite imagined that the early developments of Google and Facebook would have such an enormous impact on everything from misinformation to affecting young kids and so forth. So

(01:15:42):
that we are definitely heading to a place where whoever can control what we end up using, the AI...

Speaker 4 (01:15:54):
It's got potential problems. I will just add, Leighton, don't count us out. Independence Day, the movie: they hacked the aliens. So don't count us out. We might have...

Speaker 2 (01:16:08):
No, I don't want to count us out. All right, guys, do a sales pitch for me.

Speaker 4 (01:16:15):
On Creative Machinas? Yeah, well, Creative Machinas is just our way of trying to keep the dialogue going. We wanted to have a place where we could talk about all the issues, unfiltered. Being honest, we are not sycophants,

(01:16:35):
nor are we doomers. We're all about exploring it in a kind of quite journalistic way, to actually ask the hard questions and explore basically all these issues we've been exploring with you, and importantly, having that narrative and that dialogue out there so that people can have a way to understand the different aspects, because AI is a very

(01:16:58):
broad field, and we want to have a way of exploring that with people so that they can have a sense of it, and quite importantly, they can go off and ask their own questions.

Speaker 2 (01:17:09):
So how do you find it?

Speaker 3 (01:17:11):
It's on Substack. Substack is very popular at the moment as a kind of blog-type site. If you go to Substack and search for either of our names, then you'll definitely find it that way.

Speaker 2 (01:17:26):
Justin Matthews and Nigel Horrocks, of course. Now, I've got to congratulate you both, because this is the first time in seven years, in two hundred and eighty six podcasts, this being the two eighty-sixth, the first time I've done a double header, and it's gone pretty damn well, I reckon.

Speaker 3 (01:17:46):
Well, thank you again for having us on, Leighton, and thank you again for what you're doing, getting all sorts of great information out there. You've got a great podcast, so well done.

Speaker 4 (01:17:56):
Yeah, we appreciate it.

Speaker 2 (01:17:59):
It's been a pleasure. Nigel, I could use a news editor to keep me on the straight and narrow. So I reckon, I reckon six months more and we'll do another one.

Speaker 3 (01:18:12):
That'd be great, Leighton, and we'll look forward to it.

Speaker 4 (01:18:14):
Yeah, that'd be fantastic.

Speaker 2 (01:18:15):
We'll do it.

Speaker 4 (01:18:16):
Thank you so much.

Speaker 2 (01:18:17):
Thank you both. Now, Mrs Producer, here we are in the mail room for podcast number two hundred and eighty six.

Speaker 4 (01:18:42):
How are you, Leighton?

Speaker 2 (01:18:43):
I thought you'd never ask.

Speaker 5 (01:18:45):
Do you think it might be a good idea to have at least an hour of the day where you might, you know, go for a walk on the beach or something? Literally, you have been in this room the entire week. That's a little slap down. We want work-life balance here.

Speaker 2 (01:19:01):
I'm a prisoner in my own prison, and I don't go walking in this weather. Now, what have you got?

Speaker 5 (01:19:08):
He's going to carry on doing this till he's ninety-nine, folks.

Speaker 4 (01:19:11):
Hope you're ready for it.

Speaker 2 (01:19:12):
Don't think so, Leighton.

Speaker 5 (01:19:14):
And Andrew says, firstly, congratulations on a fascinating podcast, bringing
in people and perspectives you would otherwise not get to hear.
I have listened to every podcast at least twice, and
some more than twice. I spend a lot of time
operating machinery, so get plenty of time to myself. The
podcast fulfills the need to stimulate my brain while I'm working.

(01:19:36):
I have particularly enjoyed Robert McCulloch, and the silencing of
him shows the corruption of New Zealand and how people
in positions of power are threatened by those who may
be considered smarter than they are. Instead of bringing them
on board and utilizing their experience, they are ostracized and excommunicated.
Going to show that the big end of town is

(01:19:58):
nothing more than an incestuous club. Please consider getting mister
McCulloch back on, although I don't blame him if he
is hesitant. And then Andrew says, so another podcast he's
listened to several times, the two John Olcock podcasts. John
mentioned the current money system had about eighteen months to
two years to run. From memory, we are now more

(01:20:19):
than twelve months on from this prediction. Would you please
consider getting John back to give his opinion on where
we are currently at. And then Andrew goes on to
say that he's a big fan of crypto and he's
having a look seeing how that's going. He says, love
the mailroom and the banter between the both of you.
Please continue with your interesting and informative interviews.

Speaker 2 (01:20:41):
That's from Andrew. Andrew, thank you; the banter, the best part, gets cleaned out. Now, who have we got? From John: thank you for the podcast with Jodie Bruning from PSGR. Jodie covers the high-level governance and process problems with the bill, but there is also what it means at

(01:21:03):
a practical level for farmers, consumers and exporters. The issue is that the bill ends regulation and traceability of gene-edited GMOs, so there is no visibility in the food chain. The social license for GE has been based on the precautionary principle and the right to choose. These are to

(01:21:23):
be taken away in the bill. I would welcome the chance to discuss the issues with you on a podcast, says John, consumer researcher and advocate, spokesman for GE Free New Zealand. John, appreciate it.

Speaker 5 (01:21:37):
Leighton, Morris says: thank you for your excellent podcasts. I have just listened to your interview with Dr Kory. Even if the comments by Dr Kory cannot be fully verified, his views and evidence, together with other medical practitioners that can corroborate his research with supporting data, should be included in the current review on COVID nineteen. Further than that,

(01:21:58):
the corruption by Big Pharma is incredible and for me almost unbelievable, and should be widely disseminated through medical journals and the media. Big Pharma and their lackeys' actions are not just unethical but criminal. Unfortunately, I cannot see them being brought to justice. People like Dr Bloomfield and Jacinda

(01:22:19):
Ardern should be required to respond to his assertions, and not simply respond that they were just repeating what the WHO and medical experts were saying. Surely people in power, such as heads of government, not just in New Zealand, owe it to their public to provide alternative evidence if the information being given is considered controversial. Honesty should be a

(01:22:41):
hallmark of public officials, doctors and scientists. But I'm cynical enough to know that that will never happen. Hence, people like you, Leighton, are so important in the media. Keep up the good work. That's from Morris, the very flattering Morris. I try not to succumb to flattery, but you are correct in what you say as regards the individuals who
(01:23:05):
contributed to the vice of what happened over that period. I'm amazed, pleasantly so, with the number of people who communicate with me, and who I talk with, who say, and you've heard some of them on interviews say, that these people should be charged, and plenty often go on

(01:23:27):
and say they should be behind bars. My comment is, it will never happen. And somewhere buried in these emails there is a very short one; I don't think I can lay my hands on it straight away. From Christine: thank you for your great interviews today with Brian Leyland and Ms Bruning. Informative, but a bit depressing realizing my thoughts

(01:23:51):
are actually true. Re Donald Trump: do you think that, due to the price of gold, they might open up Fort Knox and revalue the gold that's hopefully still there? It's currently only valued at forty-two dollars an ounce, I believe. And go back to the gold standard; it would just about pay off America's debt, I imagine.

(01:24:14):
Kind regards to you both, Chris. This question about whether the gold, or all the gold, is still in Fort Knox is rolling on and on. I thought Trump was going down there to have a look at it, but then there was something I read that says that not even he has the right to go into it if they don't want him to. I don't know the answer,

(01:24:36):
but I want to see the confrontation. John says: during the latest podcast, two eight five, and previously, two seven six, you've talked about our electricity supply and looming shortages.
No one is talking about the power required to run
the massive data centers that are being built in Hobsonville
and elsewhere. My guess is that we will all be

(01:24:58):
subsidizing them and our power bills will go up again.

Speaker 4 (01:25:02):
How does this help New Zealand?

Speaker 5 (01:25:04):
I hope you get the opportunity to ask this question
next time you have a guest to discussing electricity supply.

Speaker 4 (01:25:10):
John.

Speaker 2 (01:25:11):
Thank you, John. Now, from Paul; this is not the one that I just referred to, but it's along the same lines. Thanks for another great, thought-provoking and sobering show, particularly the insights delivered by Jodie Bruning. The actions of Judith, and Jacinda before her, had me

(01:25:31):
thinking of Thomas Sowell's quote: it is hard to imagine a more stupid or more dangerous way of making decisions than by putting those decisions in the hands of people who pay no price for being wrong. We are going to miss Thomas Sowell when he's gone, but I've got one full shelf of his books, so we'll never forget. Leighton.

Speaker 4 (01:25:53):
Chris says.

Speaker 5 (01:25:54):
I'm a long time listener and discovered your radio show
in twenty fourteen when I arrived in New Zealand. I
find your podcast is full of grown up discussions about
important topics, and always value your opinion and insight. I
always enjoy your mailroom segments, but sometimes wish a younger
contributor would write in. In an effort to take this

(01:26:16):
in hand, I have put together my middle-aged musings in the hope that you find them interesting. I've tried
to make it interesting and hang together, so I hope
it's not too tedious. In any case, it has helped
me put my middle aged musings into print, so at
least I can refer to it in the future and
tell my children I told you so. Best wishes, Leighton, you're an absolute legend. And he can't be that

(01:26:39):
old, Leighton, because he says "Musings of a Middle-Aged Millennial" in his document, which I gather you've printed out and read.

Speaker 2 (01:26:47):
No, I haven't read it. Where would I get time to read this at the moment? Oh, my goodness. It's eight pages... no, six... no, three, four, five... it's five pages, but very, very small type. But let me give you an idea, because I will read it. "Musings of a Middle-Aged Millennial" is the opening, then doctors and education,

(01:27:11):
followed by houses, followed by children, followed by pensions and investments, and finally, which is probably the best one of all, Hope, as in hope. So it will get consumed, believe me. But I'm going to reprint it in a larger type.

Speaker 5 (01:27:30):
Thank you for all that effort, Chris.

Speaker 2 (01:27:33):
I reckon. Now from Ian: great interview with Pierre. Highlights, sadly, how so many folks still have no idea what this product can do for them. If you aren't already aware, here is a very good rundown on the latest attempt by this hapless Nat Party to get us corralled into digital ID.

(01:27:56):
Battle after battle after battle, it seems. Ian. Short and sweet, that.

Speaker 5 (01:28:03):
And I've just picked up Chris's page one, and I'll just read you the first couple of paragraphs. It's really cute. "Musings of a Middle-Aged Millennial." Would you believe that it's been eight years since the newspapers were telling younger millennials like me to cut back on the avo toasts
and four dollar coffees. Since then, some of us older
millennials have reached our early forties. We've climbed the hill

(01:28:25):
of youth and reached the uplands of middle age. And I,
for one, have taken a sharp intake of breath at
the view that surrounds us. Just as the excitement of
brunch is being replaced by the appeal of an afternoon snooze.

Speaker 4 (01:28:38):
I'd like to take you.

Speaker 5 (01:28:39):
through some musings of a mind that's old enough now
to be part of the problem, with the experience to
look for solutions and with the energy to make them happen.
Just one more paragraph. Let's start by setting the mood.
This isn't going to be a ripping critique of how the boomers have left us with scraps, or how unfair it is to be a millennial. I've done most things right.

(01:29:00):
I'm married with teenage children, I'm self-employed and earn enough to own a home. As a family, we've been hardworking and lucky, probably in equal measure. However, there are certain choices where I have been a contrarian, where I've missed the trip wire that caught out a lot of my fellow millennials. On the other hand, there have
been missed opportunities where a home in Auckland and a

(01:29:22):
proper career might have eventuated. Middle age is the time
to take stock and point out where we might be
going wrong and what tactics have definitely paid off. So
he says, let's.

Speaker 2 (01:29:34):
Begin. Well, let's begin. Let's begin with some facts, though. You're not middle-aged. Early forties, he said; early forties is not middle-aged anymore. It's been replaced by the early sixties, and some people think even the seventies. So you've still got a little way to go. But after today's interview, well, after today's discussion on AI,

(01:29:59):
I wonder what effect that might have had on you when you were listening to it, as to where your life sat and what's coming at you. Whether I'm afraid or really happy about something, I'm always interested in the result, whatever it might be. So, from David; this

(01:30:20):
is a report on the latest US Senate hearing into COVID jabs. They say Big Pharma can't be sued in the US, so what about in New Zealand? It is really quite shocking that Pfizer was able to sell their deadly concoction to the New Zealand government without any legal consequences. Similarly,
why haven't government and health officials in New Zealand faced

(01:30:42):
legal consequences for the atrocity they inflicted on the people of New Zealand. This is not the letter, either, that I was referring to earlier, so there were three of them if you include the one that I can't spot. Regards, David. David, there are a lot of answers to the questions that you ask, but it'll never happen. That's the point,

(01:31:06):
and the point is that we don't have the balls in this country to do it. Nobody does. It would be useless in the greatest scheme of things. Mrs Producer, thank you.

Speaker 4 (01:31:16):
Thank you, Leighton. You're done and dusted. I'm done and dusted.

Speaker 2 (01:31:20):
Shall we see you next week?

Speaker 5 (01:31:22):
I think there's a very high probability.

Speaker 2 (01:31:25):
The question is will there be next week?

Speaker 4 (01:31:44):
No.

Speaker 2 (01:31:44):
I was going to leave the podcast at the end
of the mail room for this week because we covered
an awful lot of ground. But something has caught my
attention that I thought might be of interest and useful
to wind it up with. Keep in mind this is
from an American author, an American source. AI is replacing

(01:32:06):
human workers: how companies are using automation to cut jobs. So, a little information. Across nearly every industry, artificial intelligence is
no longer just a tool. It's becoming the worker. From
Wall Street to Silicon Valley, major companies are laying off
employees and replacing them with AI systems that promise greater efficiency,

(01:32:29):
higher profits, and streamlined operations. But as corporate giants boast
about record breaking productivity, a growing number of employees are
waking up to a grim reality: their jobs may be next. A World Economic Forum report published in early twenty
twenty five found that forty one percent of employers globally

(01:32:51):
planned to reduce their workforce by twenty thirty due to
AI automation. This is not some distant prediction; it's already underway. The kind of jobs being replaced: AI is hitting hardest
where jobs involve repetitive and routine tasks, both physical and cognitive.
According to the WEF's Future of Jobs Report, these roles

(01:33:14):
are seeing rapid decline: postal service clerks, executive assistants, payroll and benefits clerks, customer service agents, mid-level software engineers, human resources personnel (would they be missed?) and graphic designers. Even in traditionally human-centered roles, companies are finding ways
Even in traditionally human centered roles, companies are finding ways

(01:33:36):
to use AI tools to automate communication, design and support. Microsoft,
IBM and others lead the AI layoff charge. Microsoft recently
announced it would lay off six thousand employees, around three
percent of its global workforce. The company said the layoffs
were part of an effort to boost efficiency and reduce bureaucracy.

(01:34:00):
At the same time, Microsoft is investing heavily in AI
and new technology platforms. IBM replaced hundreds of human resources employees with its in-house AI, called AskHR, which now handles ninety-four percent of routine HR tasks like managing vacation requests and pay statements. CEO Arvind Krishna

(01:34:25):
said the layoffs made room for hiring more programmers and
critical thinking roles. Salesforce cut one thousand jobs earlier this
year to make way for its new AI product, Agentforce. CEO Marc Benioff emphasized that using AI is now a
fundamental expectation for all employees. CrowdStrike, a major cybersecurity firm,

(01:34:51):
laid off five hundred workers as it ramped up AI deployment.
CEO George Kurtz called AI a force multiplier, explaining that
it allows the company to innovate faster, reduce hiring, and
improve customer outcomes. Even Wall Street isn't safe: Citigroup and JP Morgan are among banks projecting that AI could

(01:35:12):
lead to, are you ready, two hundred thousand layoffs in the next five years, particularly in middle office and operational roles. Citigroup noted that more than half of banking jobs are highly susceptible to automation. Now, what do the numbers say?
Fifty four percent of tech hiring managers expect layoffs due

(01:35:33):
to AI within the next year. Forty five percent believe
employees in AI replaceable roles are most at risk. Seventy
seven percent of large companies plan to reskill employees to
work with AI. Sixty nine percent say AI will create
demand for entirely new roles. Thirty two percent of American

(01:35:56):
workers believe AI will reduce job opportunities, and only sixteen
percent say they currently use AI in their own work.
Can you bulletproof your career despite the job cuts? AI
is not just replacing people, it's also creating new opportunities.
But staying employed will require adaptability and new skills. So

(01:36:20):
here are ways to future-proof your career. Firstly, learn AI basics now. Free courses from Coursera, Nvidia, and IBM can teach you how AI works and how to use it. Hiring managers now routinely ask: how familiar are you with AI?
That makes me chuckle, thinking back to the end

(01:36:42):
of the discussion, and talking about relationships with AI things: how familiar are you with AI? It's got two meanings. Anyway, number two, upskill in in-demand areas. According to surveys, the most desirable skills include AI development and cybersecurity, data analysis,

(01:37:06):
strategic thinking and adaptability. Number three, shift to human-centered roles. Jobs
that involve critical thinking, creativity and direct human interaction are
harder to automate. These include leadership, product development, sales strategy,
and high level consulting. Going back to that first one

(01:37:29):
of critical thinking, you know, it's something that we've talked a lot about on the podcasts, though it hasn't been talked much about in other areas, including, of course, education. And it seems that critical thinking has been sort of left on the byway by many folk. Now is the time to resurrect it, big time. Number four: companies like

(01:37:55):
Shopify and Fiverr have made it clear: if you can't prove you're doing something AI can't, you may not be hired or retained. Lifelong learning is the new job security. And five: stay visible and valuable. A warning for the future:

(01:38:16):
Many company leaders, like JP Morgan's Jamie Dimon and IBM's Krishna,
say that AI will augment rather than eliminate jobs, but
the early results suggest otherwise. At least for now, AI
is being used primarily to cut costs and raise profits,
not preserve jobs. For many workers, this feels less like

(01:38:38):
a tech revolution and more like a hostile takeover. As the Pew Research Center found, more than half of American
workers are worried about AI's long term impact. That fear
is not misplaced, but the outcome is not set in stone.
Those who adapt, learn and stay ahead of the curve

(01:38:58):
may not just survive the AI revolution, they might thrive
in it. For everyone else, the time to prepare is now.
And that, by the way, if you want to look for it, is from Finance and Money: AI is replacing human workers. And that's where we shall leave it. Now, if you'd like to write to us: Leighton at newstalkzb

(01:39:20):
dot co dot nz, or Carolyn at newstalkzb dot co dot nz, and we shall record your mail and
most likely utilize it. And the only thing left to
say is, as always, thank you so much for listening,
and we shall talk soon.

Speaker 1 (01:39:44):
Thank you. For more from Newstalk ZB, listen live on air or online, and keep our shows with you wherever you go with our podcast on iHeartRadio.