
November 11, 2025 • 8 mins

Artificial intelligence is growing, changing, and reshaping industries, but there are growing concerns about the harm it could bring.

Trust is a valuable asset in the digital space, and many businesses are concerned about how the online experience could change - and how it could impact their output.

Former White House CIO and Fortalice Solutions CEO Theresa Payton says it's going to get harder for people to distinguish between AI-generated and real content - and the race is on to establish proper safeguards.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Now let's talk about AI. Do you sometimes look at

(00:02):
a picture or a video and wonder, Hm, is this real?
Or is this AI? How much of the fear around
AI is founded? Theresa Payton is a former White House
Chief Information Officer. She's in the country for Spark's Tech
Summit, under way at the moment. Let's talk to her.

Speaker 2 (00:15):
Hi, Theresa. Hi, how are you? Well, thank you.

Speaker 1 (00:18):
Do you think we will ever get to a world
where we will look at a picture and not know
if it's AI or not?

Speaker 2 (00:24):
We're there now.

Speaker 3 (00:25):
Oh, I can tell the difference, and so can you, most
of the time. But sometimes I have to really take a
look at it a little closer. So we're getting closer and
closer to the day you won't be able to.

Speaker 1 (00:37):
Tell. Okay, so the experts at the moment can spot it.
Like, okay, some of it normal people can see through;
most of it the experts can still see through. Will we
ever get to a day where I will put it in front
of you and you'll go, I actually don't know? Yes? How
far away? This year? What does that say about our ability
to prove things?

Speaker 2 (00:56):
Well?

Speaker 3 (00:56):
I mean, here's the thing: trust. It's one of our
most valuable assets, and it's now one of our most
vulnerable assets. You can't even trust your own eyes. But
the technology is there, or will be in the very near
future. The question is which one's going to win out.
Like, for example, could you tell

(01:17):
whether it's really the two of us talking, or are you
actually talking to a voice clone of me? So there is
technology now that could say this is of human origin
or this is of computer origin.

Speaker 1 (01:29):
So do you think there will always be technology that
will be able to tell us the truth?

Speaker 3 (01:33):
Yes, but the question is will it be implemented into
the process fast enough, before we get duped by fraudsters
and criminals.

Speaker 1 (01:42):
And will it be accessible enough? Because the problem is, obviously,
as a member of the media, we often rely on photographs, videos,
audio, documents to say the thing that we are alleging
is true, because here's the proof, right? Yes. So if
AI is able to kind of confound that and I
can't use that anymore, will I always be able to

(02:02):
rely on the technology to back me up and go, no,
that is really the truth.

Speaker 3 (02:05):
You should be able to. The other thing to think about, too,
is we can watermark. So, for example, this conversation you
and I are having: the radio station could watermark that conversation,
so if somebody tries to meddle with it, the AI
would know. So, like, in my mind, what has
to happen is all the tech product companies, when they

(02:25):
edit something, it needs to say what you're about to
hear has been produced by generative AI. There needs
to be a disclaimer, like there is for things that
are unhealthy for you.

Speaker 1 (02:34):
Only the good guys are going to do it, though
the bad guys won't. But you can see how you
can see how you know, Yeah, sure we can prove
and disprove, but there could just be this proliferation of
stuff that is untrustworthy and we're kind.

Speaker 2 (02:47):
Of on our own, don't we.

Speaker 3 (02:49):
Yeah, well, we somewhat are on our own. You know,
I say we used to be in a place where
it was trust but verify, and now I say never trust,
always verify. Verify, verify, and verify one more time.

Speaker 1 (03:02):
Yeah, okay. So do you worry about what seems
to be the worst case scenario with AI, which is
that we lose control?

Speaker 2 (03:10):
Yes?

Speaker 1 (03:11):
Do you really?

Speaker 2 (03:12):
I really do worry about that. Okay.

Speaker 1 (03:13):
How far away is that, if it happens?

Speaker 3 (03:15):
Well, I think there's a lot of really smart people
around the world, New Zealand included, who are having really
hard conversations around governance and guardrails for AI. So my
hope is those hard conversations will turn into governance and
guardrails before we hit this. But I do think twenty
twenty seven, twenty twenty eight, if we don't get this

(03:35):
right now. This isn't, you know, how we put things
off with social media, and it's still a little bit of
a dumpster fire sometimes. Yeah, if we don't get this right,
this is different.

Speaker 1 (03:45):
And what happens if we lose control? What does the
AI do?

Speaker 2 (03:49):
For starters, it's a huge energy hog.

Speaker 3 (03:52):
So if you love the planet: it can run infinitely
and tell itself to keep running. We've already seen, in labs,
where researchers, who are, you know, kind of like your
ethical hackers, try to see if they can trick the
generative AI into overriding kill switches by telling it,
don't create a kill switch for yourself, or don't create

(04:14):
an override to the kill switch, and they see whether
it basically becomes self-preserving and does it anyway, and
tries to create something where you can't turn it off.

Speaker 2 (04:24):
Yeah, yeah, yeah, does it succeed?

Speaker 3 (04:26):
It will, in some of these lab cases, in limited areas, yes.

Speaker 1 (04:32):
Can we not override it, then? Will we not always
have an override function?

Speaker 2 (04:37):
The question is, will you have engineers who really
know how it works?

Speaker 1 (04:40):
Why don't you just go to the wall and pull
it out?

Speaker 3 (04:43):
I mean, that's, yeah, ideally, right? Just pull it out.
Sort of like the movie Airplane, where he unplugs the
runway lights.

Speaker 1 (04:49):
But I'm serious. Is that always going to be an
option, or could it not be?

Speaker 3 (04:52):
It may not be, because here's the thing: if you
cut the power to the mainframe, you don't know if
it's already proliferated itself to someplace else.

Speaker 2 (05:00):
Yeah.

Speaker 1 (05:01):
You haven't just watched too many movies, have you, Theresa?

Speaker 2 (05:03):
No, I don't have time for movies. It's all in
my head.

Speaker 1 (05:06):
What is the thing that you are most worried about?

Speaker 2 (05:10):
I worry about

Speaker 3 (05:13):
losing human essence as part of, sort of, the story.
And so, for example, I've watched side-by-sides of
the same person interacting with a customer service agent.

Speaker 2 (05:24):
You can hear their voice and

Speaker 3 (05:25):
you can hear the human essence in the interaction between
the two. And then they opted, in the next phone
call, to talk to a customer service bot, because they
didn't have to wait if they talked to the bot,
and they started responding like the robot, like without the essence,
like it was kind of rude and

Speaker 1 (05:41):
Short, perfunctory.

Speaker 3 (05:41):
So if we spend more time of our day interacting
with customer service chatbots instead of each other, we're going
to start to lose that, because it's muscle memory for
us to be polite and to be nice. And so
if your muscle memory becomes just do the task
and have no emotion, I worry about us losing our
human essence.

Speaker 1 (06:02):
Yeah. And isn't it also, I mean, you make me
think we have such a problem with loneliness, right? You know,
like the interactions that you had maybe one hundred
years ago: you go to the supermarket, get the kids,
go to the kids' school, interact with the teachers, all
that stuff that you would do in your day. We've
lost so much of that. And doesn't AI actually have
the ability here to just make

(06:22):
that worse?

Speaker 3 (06:23):
It does. And so we're seeing, you know, it's
sort of like two sides of the same coin.

Speaker 2 (06:29):
So on one side, for somebody

Speaker 3 (06:31):
who is lonely, it would be nice for them to
have an outlet or to be able to game. Maybe
they're very socially awkward, and so maybe they can kind
of like use it as a coach to help them
get their courage up to leave the house and go
to a party, for example. But what we're seeing is
that, because of the way these chatbots are created,

(06:52):
they want you to come back for more. So
they're basically designed to give you more of what
you came for, which means addictive properties. And if it's addictive
that way, then you'll find... There was a story
in the Wall Street Journal where somebody said, look, I'm
an extrovert, and I became more introverted the more I
talked to my chatbot.

Speaker 2 (07:12):
Interesting, so it's addictive.

Speaker 3 (07:14):
And the person literally had to have somebody in
their life say, I think you spend too much time
on your phone talking to a bot.

Speaker 1 (07:22):
It's just like that movie Her, isn't it? Yes. Okay,
what do you use it for in a good way?

Speaker 2 (07:28):
Oh, there's so many amazing ways.

Speaker 3 (07:29):
So I'm trying to learn Italian and so I have
a chatbot that I use to kind of quiz me
on my Italian flash cards, so that could be really helpful.
I actually tell people, instead of just asking it to,
like, summarize something, sometimes I'll say, if you were
Bob Iger, how would you read this article and how
would you summarize it? Or if I'm trying to brainstorm,

(07:52):
you know, I run a company. I've got thirty employees,
and sometimes I'm trying to brainstorm on a different way
to present our services to clients. And so you can
kind of go in this roleplay mode and do that.
So there's a lot of really positive uses for it.

Speaker 1 (08:07):
Yeah, hey, it's been very nice to talk to you.
Thanks for chatting to us.

Speaker 2 (08:10):
It's been amazing to be with you here in studio.
Great to meet you.
Great to meet you.

Speaker 1 (08:13):
Yeah, go well. Theresa Payton is CEO of Fortalice Solutions, and
of course former White House Chief Information Officer. For
more from Heather du Plessis-Allan Drive, listen live to Newstalk
ZB from 4pm weekdays, or follow the podcast on iHeartRadio.