Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
You're listening to a podcast from Newstalk ZB. Follow
this and our wide range of podcasts now on iHeartRadio.
Speaker 2 (00:16):
Now, let's talk about AI. Do you sometimes look at
a picture or a video and wonder, hmm, is this
real or is this AI? How much of the fear
around AI is founded? Theresa Payton is a former White
House Chief Information Officer. She's in the country for Spark's
Tech Summit, under way at the moment. Let's talk to her. Hi, Theresa. Hi,
how are you? Well, thank you. Do you think we
will ever get to a world where we will look
(00:38):
at a picture and not know if it's AI or not?
Speaker 3 (00:40):
We're there now. Oh, I can tell the difference, so
can you, most of the time. But sometimes I
have to really take a look at it a little closer.
So we're getting closer and closer to the day you
won't be able to tell.
Speaker 2 (00:53):
Okay. So the experts at the moment
can spot it. Like, okay, some of it normal people can
see through; most of it the experts can still see through.
Will we ever get to a day where I will
put it in front of you and you'll go, I
actually don't know? Yes. How far away? This year. What
does that say about our ability to prove things?
Speaker 3 (01:12):
Well, I mean, here's the thing. Trust is one of
our most valuable assets, and it's now one of our
most vulnerable assets. You can't even trust your own eyes.
But the technology is there, or will be
in the very near future. The question is which one's
going to win out. Like, for example, could you
(01:33):
tell whether or not our voices are the two of
us talking, or are you actually talking to a voice
clone of me? So there is technology now that could
say this is of human origin or this is of
computer origin.
Speaker 2 (01:45):
Do you think there will always be technology that will
be able to tell us the truth?
Speaker 3 (01:49):
Yes. But the question is will it be implemented into
the process fast enough, before we get duped by fraudsters
and criminals.
Speaker 2 (01:58):
And will it be accessible enough? Because the problem is, obviously,
as a member of the media, we often rely on photographs, videos,
audio, documents to say the thing that we are alleging
is true, because here's the proof, right? Yes. So if
AI is able to kind of confound that and I
can't use that anymore, will I always be able to
(02:18):
rely on the technology to back me up and go, no,
that's really the truth?
Speaker 3 (02:21):
You should be able to. The other thing to think
about, too, is we can watermark. So, for example, this
conversation you and I are having, the radio station could
watermark that conversation, so if somebody tries to meddle with it,
the AI would know. So, like, in my mind, what
has to happen is all the tech product companies,
when they edit something, it needs to say what you're
(02:44):
about to hear has been produced by generative AI. It's
like there needs to be a disclaimer, like there is for
things that are unhealthy for you.
Speaker 2 (02:51):
Only the good guys are going to do it, though; the
bad guys won't. But you can see how, you
know, yeah, sure we can prove and disprove,
but there could just be this proliferation of stuff that
is untrustworthy, and we're kind of on our own, aren't we? Yeah,
we somewhat are on our own, you know.
Speaker 3 (03:09):
I say, we used to be in a
place where it was trust but verify, and now I
say never trust, always verify; verify, verify, and verify one
more time.
Speaker 2 (03:18):
Yeah, okay. So do you worry about what seems
to be the worst case scenario with AI, which is
that we lose control?
Speaker 3 (03:27):
Yes. Do you really? I really do worry about that.
Speaker 2 (03:29):
Okay, how far away is that, if it happens?
Speaker 3 (03:32):
Well, I think there's a lot of really smart people
around the world, New Zealand included, who are having really
hard conversations around governance and guardrails for AI. So my
hope is those hard conversations will turn into governance and
guardrails before we hit this. But I do think twenty
twenty-seven, twenty twenty-eight, if we don't get this
(03:52):
right now... This isn't, you know, like how we put things
off with social media, and it's still a little bit of
a dumpster fire sometimes. Yeah, if we don't get this right,
this is different.
Speaker 2 (04:02):
And what happens if we lose control? What does AI do?
Speaker 3 (04:05):
Well, for starters, it's a huge energy hog. So
if you love the planet: it can run infinitely and
tell itself to keep running. We've already seen in labs
where researchers, who are, you know, kind of like your
ethical hackers, try to see if they can trick
the generative AI into creating kill switches by telling it
(04:28):
don't create a kill switch for yourself, or don't create
an override to the kill switch, and they see
where it basically becomes self-preserving and does it, and
it tries to create something where you can't turn it off.
Speaker 2 (04:40):
Yeah, yeah, yeah, does it succeed?
Speaker 3 (04:43):
It will, in some of
these lab cases, in limited areas, yes.
Speaker 2 (04:49):
Can we not override it, then? Will we not always
have an override function?
Speaker 3 (04:54):
The question is, will you have engineers who really
know how it works?
Speaker 2 (04:57):
Why don't you just go to the wall and pull
it out?
Speaker 3 (04:59):
I mean, that's, yeah, ideally, right? Just pull it out,
sort of like the movie Airplane, where he unplugs the runway.
Speaker 2 (05:05):
But I'm serious. Is that always going to be an
option for us? It may not be.
Speaker 3 (05:09):
It may not be because here's the thing. If you
cut the power to the mainframe, you don't know if
it already proliferated itself to someplace else.
Speaker 2 (05:17):
Yeah. You haven't just watched too many movies, have you, Theresa?
Speaker 3 (05:19):
No, I don't have time for movies. It's all in
my head.
Speaker 2 (05:22):
What is the thing that you are most worried about?
Speaker 3 (05:26):
I worry about losing human essence as part of, sort
of, the story. And so, for example, I've watched side
by sides of the same person interacting with a customer
service agent. You can hear their voice and you can
hear the human essence in the interaction between the two.
And then they opted in the next phone call to
(05:48):
talk to a customer service bot, because they didn't have
to wait if they talked to the bot, and they
started responding like the robot, like without the essence, like
it was kind of rude and short and perfunctory.
So if we spend more time of our day interacting
with customer service chatbots instead of each other, we're going
to start to lose that, because that's muscle memory for
(06:09):
us, to be polite and to be nice. And so
if your muscle memory becomes just do the task and
have no emotion, I worry about us losing our human essence.
Speaker 2 (06:19):
Yeah. And isn't it also, I mean, you make me
think, we have such a problem with loneliness, right? We
don't have the interactions that you had maybe one hundred
years ago. You go to the supermarket, get the kids,
go to the kids' school, interact with the teachers, all
that stuff that you would do in your day. We've
lost so much of that, and doesn't
AI have the ability here to actually just make
(06:39):
that worse?
Speaker 3 (06:40):
It does. And so we're seeing, you know, it's
sort of like two sides of the same coin. So
on one side, for somebody who is lonely, it would
be nice for them to have an outlet, or to
be able to game. Maybe they're very socially awkward, and
so maybe they can kind of use it as
a coach to help them get their courage up to
(07:00):
leave the house and go to a party, for example.
But what we're seeing is that, because of the way
these chatbots are created, they want you to come
back for more. So they're basically designed to give
you more of what you came for, which means addictive properties.
And if it's addictive that way, then you'll find... And
(07:21):
there was a story in the Wall Street Journal where
somebody said, look, I'm an extrovert, and I became more
introverted the more I talked to my chatbot. Interesting. So
it's addictive, and the person literally had to have
somebody in their life say, I think you spend too
much time on your phone talking to a bot.
Speaker 2 (07:39):
It's just like that movie Her, isn't it? Yes. Okay,
what do you use it for in a good way?
Speaker 3 (07:45):
Oh, there's so many amazing ways. So I'm trying to
learn Italian, and so I have a chatbot that I
use to kind of quiz me on my Italian flash cards,
so that can be really helpful. I actually tell people,
instead of just asking it to summarize something,
sometimes I'll say, if you were Bob Iger, how would
(08:05):
you read this article and how would you summarize it?
Or if I'm trying to brainstorm, you know, I
run a company, I've got thirty employees, and sometimes I'm
trying to brainstorm on a different way to present our
services to clients. And so you can kind of go
into this roleplay mode and do that. So there's a
lot of really positive uses for it.
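[Editor's note: a minimal sketch of the role-play prompting described above, assuming a standard chat-style message format. It only builds the messages; send_to_chat_model is a hypothetical stand-in for whichever chat client is actually used.]

def build_roleplay_messages(persona: str, task: str, content: str) -> list:
    # Set a persona and a task in the system message, then pass the user's material.
    return [
        {"role": "system", "content": f"You are {persona}. {task}"},
        {"role": "user", "content": content},
    ]

# Quiz me on Italian flash cards.
flashcard_messages = build_roleplay_messages(
    persona="a patient Italian tutor",
    task="Quiz me one flash card at a time and correct my answers.",
    content="Start with basic greetings.",
)

# Summarize an article in a chosen voice.
summary_messages = build_roleplay_messages(
    persona="Bob Iger",
    task="Read the article I give you and summarize it as you would.",
    content="<article text here>",
)

# reply = send_to_chat_model(flashcard_messages)  # hypothetical API call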
Speaker 2 (08:24):
Yeah. Hey, it's been very nice to talk to you.
Thanks for chatting to us.
Speaker 3 (08:27):
It's been amazing to be with you here in studio.
Great to meet you.
Speaker 2 (08:29):
Yeah, go well. Theresa Payton, CEO of Fortalice Solutions and,
of course, former White House Chief Information Officer.
Speaker 1 (08:36):
For more from Newstalk ZB, listen live on
air or online, and keep our shows with you wherever
you go with our podcasts on iHeartRadio.