
October 22, 2025 33 mins
Many people don’t like AI because they feel it makes them worse at their jobs. There’s a concern over skill atrophy that’s eroding problem-solving skills and the ability to learn. People are now either pro-AI or anti-AI. Suzanne Somers’ widower Alan Hamel has created an AI clone of his deceased wife. Some advertising agencies are asking to license people’s voices, so they don’t have to pay them moving forward! ChatGPT is now a shopping assistant.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You're listening to KFI AM six forty on demand.

Speaker 2 (00:08):
Hey, good evening.

Speaker 3 (00:09):
I'm Chris Merrill, KFI AM six forty, more stimulating
talk on demand anytime in the iHeartRadio app. Portions of
this segment have been written by AI.

Speaker 1 (00:16):
Ah, come on.

Speaker 2 (00:20):
Like I'm replacing myself.

Speaker 3 (00:25):
Dang I uh. I started using AI a little bit.
I feel like I gotta get familiar with it. Do
you ever feel like the technology is changing and if
you don't keep up, you're gonna get left behind when
it comes to the AI stuff. I think of like
when the Internet came to prominence, and there's that

(00:49):
famous clip of Katie Couric and Bryant Gumbel talking about
email and like, email, what is it?

Speaker 2 (00:55):
What's so wrong with just sending a letter?

Speaker 3 (00:57):
I don't understand this right, And of course it didn't
age very well. They're like, no one's ever going to
catch onto this, and of course obviously our entire lives
are wrapped up in the internet. For Pete's sake, AWS
went down for a couple of hours yesterday and it
was like the whole world came to a standstill.

Speaker 2 (01:13):
So I'm just terrified.

Speaker 3 (01:16):
That AI is going to become so prevalent.

Speaker 2 (01:21):
It's sort of like.

Speaker 3 (01:22):
When you apply for a job now, right, there was
a time where you would apply for a job and
on your resume under skills you would say, you know,
word excel something like that. Now, if you say that,
it kind of makes you look completely out of touch.
The assumption is you're proficient with a word processor, just

(01:45):
ask Mark.

Speaker 2 (01:47):
You write news stories all the time.

Speaker 3 (01:50):
If you were applying for a job as a news editor,
would you say proficient in Word? That would just be
assumed, wouldn't it?

Speaker 4 (01:59):
Well, yeah, you want to you want to have the
basics covered.

Speaker 3 (02:03):
Yeah. So, I mean, it would be like in nineteen
ninety four you maybe would have put on your resume
something about your familiarity with the emerging tech,
including the Internet. But if you were to go to
a job now and say I'm good at internet, they go,
who isn't, dope?

Speaker 2 (02:22):
We all know how the internet works.

Speaker 3 (02:24):
So I'm a little concerned with the AI, that it's
going to be assumed, like a requirement that is not
only mandatory but also assumed that you know it,
because if you don't, you just won't.

Speaker 2 (02:38):
Get a job.

Speaker 3 (02:39):
That's my worry, and I don't want to be somebody
who's left behind. And my wife, to her credit, has
been way up on AI. She's been using it since
like ChatGPT version one point five or something, right?
I mean, she's ahead of the game on it.

Speaker 2 (02:56):
She kept telling me you got to use it, you
gotta use it. I just hated it. I hated it.

Speaker 3 (02:59):
I was resistant. I continue to find that it is flawed.
I have to double check the work, all kinds of things.

Speaker 4 (03:08):
Not a lot of upsides to it, turns out. Well,
and we're finding out more about that every single day.

Speaker 3 (03:13):
Yes, yes, I would say yes, But as with any tech,
there's a curve, right. And I had a friend that said,
you know, what's the point, how does this? Does this
make you better at your job? But I said, I
don't know that it does yet. In the same way
that radio hosts in nineteen ninety two would open the newspaper,

(03:35):
take a look at what the stories are, and then say, oh,
I'll talk about this, there was no way to do
additional research. There was no way to kind of compare
it to other stories, or a way to go, God,
it sure seems.

Speaker 2 (03:47):
Like, you know, that heist at the Louvre.

Speaker 3 (03:50):
It seems like there was another heist. Wasn't the
Mona Lisa involved in that? And in the old days
you'd ask around the newsroom, does anybody remember the Mona
Lisa heist? Anybody know anything about that? Somebody might say, oh,
I did a paper on it in college, but for the
most part you didn't know. Then the Internet comes along
and in seconds you've got it. There it is. Oh yeah,
Mona Lisa heist, Louvre, gone, nineteen thirty whatever. It was done.

(04:13):
But now there is some resistance. And this is what
Mark's alluding to, is that there are people that are
concerned with it. You've got coders that are saying, look,
I don't trust it.

Speaker 2 (04:22):
It's got problems.

Speaker 3 (04:23):
You've got other people that are saying, I
don't like AI because it is making me worse at
my job, which feels contradictory. AI is supposed to make
you more productive. It's supposed to make you better at
what you do. Meta has been saying that they are

(04:44):
going to up expectations of people who are using AI.
They told their Metaverse workers, use AI and go five
times faster. Those are Meta's words. But it's sort of
an if you don't use it, you lose it
situation. Think back to when you were younger,
and I don't know when you came up, but when

(05:06):
I got to high school freshman year, going into algebra,
and they handed out these graphing calculators to everyone, and
they were pretty expensive. I mean they were like, I
think one hundred bucks at the time, something like that,
the eighty two.

Speaker 2 (05:20):
Mine was the eighty one.

Speaker 3 (05:21):
Yeah, we had a TI eighty one, and then by
my senior year, I bought a TI eighty I want
to say eighty five, and then I was I think
TI eighty four somehow surpassed the eighty five, which was
confusing to me.

Speaker 2 (05:34):
Felt unmathy.

Speaker 3 (05:36):
Anyway, So we had this thing, and I remember I
went home and my parents were a bit dismayed.
They said, how are you going to learn how to
do math if you're punching things into the calculator? And
I mean they were right, I was terrible at math.
They were right, But also I needed to know how

(05:57):
to use the graphing calculator because it wasn't about necessarily
learning those skills. It was about learning how to use
the tools to do those skills you see. But my
parents are resistant to everything. I use a drill with
a screwdriver tip on it in order to do a
lot of the you know DIY stuff. My father still
turns a screwdriver and like, what are you doing? You're

(06:18):
spending so much time. He goes, this is the way
I've always done it, and I just go, look, it's done.

Speaker 2 (06:24):
Now, let's move on to the other stuff. Oh, that's cheating.

Speaker 3 (06:28):
There is a concern over skill atrophy. You've got educators
and students that are worried that AI tools are eroding
critical thinking and problem solving skills. They call it
COVID two point zero for education, students shortcutting their way
through learning. They say the long term consequences could be devastating.

Speaker 5 (06:44):
For what it's worth, my academic advisor said that I
really do need to embrace AI a lot more than
I do now, because it actually does help when you
know how to.

Speaker 2 (06:52):
Use it, when you know how. I'm glad you said that, Sam.

Speaker 4 (06:55):
Ditch him. And Chris, I want you to get a divorce. Wow. Okay,
that's serious. Who's going to write up that divorce paper?
Don't get an AI to do it, a DivorceGPT.
Because you use the term skill atrophy;
it's literally brain atrophy. And we're finding out more about
this with each passing week, and you can't let the
tech bros gaslight you into thinking that you're a Luddite

(07:17):
for not embracing this, because it's turning out to be
a massive failure on almost every level.

Speaker 2 (07:23):
Sam, just mark this, Mark this because I want to
come back to this.

Speaker 3 (07:27):
When I've been replaced by an AI? No, no, no, no,
because I'm confident. One of the other things is that
people value authenticity, and if they know that they're hearing
from an AI, they don't want it. They want someone
who has a shared life experience, not someone that emulates
a shared life experience. And so, for instance, my son

(07:47):
is like every twenty six year old out there. He
just YouTubes. He doesn't watch TV. He watches YouTube. And
the algorithms got him. And he said that when a
video comes up and he can hear that, it's an
automated voice, you know, and if you hear it enough
you can identify it. So he identifies the AI voice,
or he'll have an AI host that starts presenting something,
and he said, I skipped the video. Even if it's

(08:07):
something I'm interested in, I skip it because I don't
want that. And I don't think he's alone. I think
this is very prevalent in that
we do not want to be told things by the automation.
We want to be told something by our friends. We
want an actual human who has a vested interest in
our learning, right? And so I think, Mark,

(08:31):
that's why you and I are still going to be
around in five years. But I also want to take
and I want to mark this show so that
in five years we can examine was Mark prescient?

Speaker 4 (08:41):
Was he?

Speaker 2 (08:41):
Was he?

Speaker 3 (08:42):
Was he the Markstradamus, like that? Or was
Mark the Katie Couric and Bryant Gumbel? Well, there's going
to be a pendulum.

Speaker 4 (08:50):
There's going to be a pendulum because the one major
force that we're all going to have to contend with
is corporations want to fire as many people as they
possibly can to save money and increase profits for their shareholders.
And then they're going to find out that people don't
want the AI, and then they're going to have to
pull a DOGE thing and start to bring some humans back.

Speaker 3 (09:11):
So we saw this happen in the business
world in the past with offshoring. Remember, all the companies
were like, we can save money if we start offshoring
our customer service. We can start offshoring
our accounting. We can start offshoring, offshoring, offshoring. But the
blowback was such that they said, Okay, we're going to
have to have American workers. We're gonna have to pay

(09:32):
them American salaries in order to make sure that our
customers are happy. And now I can tell you this
from the inside, they're starting to offshore again. They're trying
to find that balance. And especially when it comes time
that businesses are doing the projections for twenty six and
they're not rosy and they're going, how do we save money,

(09:52):
They're gonna start offshoring some more. I think that there's
going to continue to be pushback. Mark, it sounds like
you're very reticent to accept the AI.

Speaker 4 (10:02):
Well, I read as much about it as I can,
and I really haven't seen any upside to it at
all yet. I mean, there are estimates that it costs
up to fifteen dollars to make a dollar of profit.
It uses water and resources like nobody's business, to the
extent that it's going to be hard for people who
need to drink and use water to get it because
of all these centers that are popping up.

Speaker 2 (10:22):
And that's just the start. Yes, but remember that's somebody else.

Speaker 3 (10:27):
And I know that sounds facetious, But since when have
we been worried about ravaging someone else's natural resources to
the detriment of the native population. And I know we've
got Americans that we're worried about the data centers in
Virginia or Arizona or Nevada or wherever it is. And
there is going to be that uh oh, we have
to make sure that there's enough water in Elko in

(10:48):
order to run our data centers. But if this is
something where we can rape
the resources from a foreign country, when does that ever
stop us?

Speaker 4 (10:58):
It'll get to us sooner rather than later, I think
it will. And we're also finding out that it's not
really intelligent. It just it mimics intelligence, and it can't
function without stealing content from people. And now it's stealing
from itself, and it's like a photocopy of a photocopy
of a photocopy, so we're seeing the degeneration of the
information in real time.

Speaker 3 (11:16):
I agree with what you're saying, and that's one of
the other reasons that people are pushing back the copyright
and the ethics control. I'm late on time here, but
you're hearing it from Mark. He's not the only one,
because more Americans are becoming pessimistic about AI, not just resistant, pessimistic.
You'll find out why that is beyond just the reasons
that Mark offered here, because there's some more that is next.

(11:38):
I'm Chris Merrill, KFI AM six forty, live everywhere on
the iHeartRadio app. Chris Merrill, KFI AM six forty, more stimulating
talk on demand anytime in the iHeartRadio app. I
love talking about the AI stuff, and we'll talk about
how it's invading your life even a little bit more
after Mark's eight thirty news. But there is pessimism around AI,

(12:01):
not just reticence, but pessimism. So first of all, I
gotta give you a quick update on this. So you know,
Sora two makes the videos, and I don't
know if you've seen this. My Instagram feed is
filled with mister Rogers talking to Tupac and Jeffrey Dahmer
and Bob Ross and really, yeah, and you've seen these.

Speaker 5 (12:26):
Yeah, it's a it's it's really uncanny at this point.

Speaker 3 (12:29):
It is, and it's funny. It's become basically
like a meme, right, because it's

Speaker 2 (12:36):
It's putting.

Speaker 3 (12:37):
It's putting mister Rogers in situations mister Rogers would never
be in, like a WWE ladder match with
Bob Ross or laughing about cannibalism with Jeffrey Dahmer, right,
and we go, that's not mister Rogers. That's
so the antithesis of mister Rogers. That's why it's funny, right,
that's irony. However, it's kind of like Liam Neeson and

(12:59):
Naked Gun, right, It's like, ooh, that is funny because
it feels so out of place. However, there's some pushback.
I saw one with Martin Luther King too, and now
the family has complained about it, so now OpenAI
and its Sora video machine are removing Martin Luther
King because the family's complained about it.

Speaker 2 (13:18):
However, some people are.

Speaker 3 (13:20):
Embracing this, including Suzanne Somers' widower. Two years after her death,
he is creating a Suzanne AI twin, saying it's
a really interesting project.

Speaker 2 (13:35):
They were married for fifty five years.

Speaker 3 (13:36):
She died, of course, just two years ago of breast cancer,
and he says it was Suzanne.

Speaker 2 (13:41):
I asked her a few questions.

Speaker 3 (13:43):
And she answered them, and it blew me away, and
everybody else was blown away as well. So anyway, he's
all excited about this AI clone of his deceased wife.

Speaker 2 (13:55):
I feel like that's.

Speaker 3 (13:56):
A slippery slope where you're not giving yourself actual closure
if you're creating this replica, not to mention a bit
macabre, kind of. But also, wouldn't you want to meet yourself?
I mean, isn't there a little bit of you that
would be curious about meeting your clone before you're done?

Speaker 4 (14:14):
Maybe not, I am, I'd be interested. I'd be like what,
I don't even know? Yeah, what do you think you're
gonna find out?

Speaker 2 (14:24):
I don't know.

Speaker 3 (14:25):
Might learn something about myself, Yeah, I might learn something
I didn't know about myself.

Speaker 4 (14:30):
You know, there was a scene in the, have you
been keeping up on the Peacemaker show?

Speaker 2 (14:35):
Yeah, I love that. HBO.

Speaker 4 (14:36):
Yeah, So they go, they go to another multiverse where
they all see other versions of themselves.

Speaker 2 (14:41):
John Cena runs into I thought for sure.

Speaker 4 (14:43):
When the Vigilante met his alternate reality self, they were
just gonna have sex.

Speaker 2 (14:49):
It felt like it could go there. It wouldn't have
surprised me. The guy is a bit of a weirdo.

Speaker 4 (14:53):
I mean, that's not how it turned out, so that
wasn't a spoiler.

Speaker 2 (14:56):
So but I figured that's all they do.

Speaker 3 (14:58):
So Americans are becoming pessimistic and sorting themselves into two camps.
You got your pro AI camp and your anti AI camp,
And there is some backlash toward the tech companies for
over promising what AI can do. This is similar to
what Mark was talking about too. There's also a fear
that AI is going to erode our humanity, although at

(15:18):
this point, if we take an honest look, what is
there left to save? The older I get,
the more I become a nihilist. And a number of
people are very angry that they don't have more choices
to use AI because it's being thrust upon us and
we just don't inherently trust automation.

Speaker 2 (15:35):
And why is that?

Speaker 3 (15:36):
I want you to think about Waymo. Statistically, Waymo is
way better at driving than humans. But you hear these
news items about a wrong way Waymo, or a car
that's been driving in circles for no reason. Meanwhile, you
got drunk drivers, you got speed demons, you got texters,
you have unstable pilots. We had tragic stories just today

(15:57):
about people being killed on the freeway because somebody else
decided to take their lives.

Speaker 2 (16:05):
In their own hands.

Speaker 3 (16:07):
So we've got these people behind the wheel at every
moment of every day, posing a much bigger risk than
the driverless Waymos. But with that, there is an
understanding that we all have something to lose.

Speaker 2 (16:21):
There are no stakes for the machines.

Speaker 3 (16:24):
So when AI starts hallucinating on your big presentation that
you're supposed to give and it gives out misinformation, what's
it lost? Nothing. You've lost. You're the one who's taking
the risk. You're the one that has skin in the game.
And I would also say that our experiences with the
blue screen of death and computers randomly crashing at the

(16:47):
most inopportune moments are adding to our anxiety about trusting
AI for anything.

Speaker 2 (16:54):
So there are a number of reasons that people are becoming

Speaker 3 (16:56):
Skeptical. Older Americans especially, I think, in many cases because
they're not keen on new technology. But I also think
that experience with machines in the past has given us
ample reason, justifiably so, to be hesitant to offer too
much trust. But you know where, I'm gonna use it,

(17:20):
and I think it's gonna be great. I have a
mandatory self evaluation for work that I have to fill out,
and they always ask the dumbest, most corporate questions, and
I think I'm gonna have AI give it the dumbest,
most corporate answers. I'm kind of thinking that tomorrow night
we might do my employee evaluation live on the air Mark.

Speaker 2 (17:41):
I haven't decided yet.

Speaker 4 (17:42):
Oh, that could be fun. Or you could just let
me do the answers. That's not a bad idea. I
think we could have some fun with this. It really
could be fun. I'll tell you, though, it's so much
less satisfying giving the finger to a Waymo car
that cuts you off.

Speaker 2 (17:56):
Yeah.

Speaker 4 (17:56):
Really? You got to take that into consideration. That's a
good point. Yeah, but you're far less likely to be shot.
That's true too. But you've been willing to take the chance.

Speaker 3 (18:05):
Meanwhile, you're gonna get a chance to go on an
AI shopping spree. Whether you like it or not, you'll
find out what they're buying you next, it's your credit card.

Speaker 2 (18:13):
Anyway, it's next.

Speaker 3 (18:14):
Chris Merrill, KFI AM six forty, live everywhere on the
iHeartRadio app.

Speaker 1 (18:17):
You're listening to KFI AM six forty on demand.

Speaker 2 (18:23):
Hi, good evening.

Speaker 3 (18:23):
I'm Chris Merrill, KFI AM six forty. More stimulating talk
and real people, not AI voices.

Speaker 2 (18:32):
I'm concerned. I've had.

Speaker 3 (18:35):
I've had some of these ad agencies. Oh man, you
know what they're up to. I've had ad agencies that
have said, hey, why don't we just do an AI
copy of your voice, and then we can just license
your voice. We'll
do like a one time license on your voice, and
then we can just make the commercials here and you

(18:55):
don't have to do anything with it. I'm like, no.
No, you'd get an AI likeness of my voice and
then use it for whatever you want?

Speaker 1 (19:04):
No.

Speaker 3 (19:05):
And of course they're thinking, oh, yeah, but we could
just save some money. Why are we paying you every
month when we could just pay you one time and
then have your voice for everything we ever want ever, I'm.

Speaker 4 (19:14):
Looking forward to hearing your voice on a variety of projects.
Oh, how embarrassing. How much should I sell my
voice for? Oh, as much as you possibly can.

Speaker 2 (19:23):
I mean, I don't want to be that guy with me.
Oh I just had a little all right there, keep
it my internet? Yeah, okay. How much should
I sell my voice for?

Speaker 3 (19:32):
Because I don't want to be that guy that does
the Nike swoosh and makes two hundred bucks.

Speaker 4 (19:36):
Oh, it's not going to be the Nike swoosh. It's
going to be hemorrhoid cream. It's going to be worse
than that.

Speaker 3 (19:40):
No, but you know what I mean. The guy that
did the Nike swoosh, and Nike made billions off of that,
he made two hundred bucks. You know, I don't want
to be that. So how much should I sell my
voice for? I mean, I might sell a lifetime license
on my voice?

Speaker 2 (19:51):
Ten million.

Speaker 4 (19:53):
I don't think any figure is going to be enough,
because you have no idea how things are going to
pan out.

Speaker 2 (19:57):
Yeah, but I don't care at that point. If I
got ten million, I can go buy... Maybe ten million
is not enough. Maybe it's one hundred million.

Speaker 3 (20:04):
I have to have enough to buy an
island where nobody can find me and then accuse me
of being whatever they're gonna use my voice for. I'm
sure they would immediately turn me into some sort of
a racist, because that's what people do.

Speaker 2 (20:17):
Oh so, oh yeah, they'd just.

Speaker 3 (20:19):
Be like, oh, I'm gonna use his voice and I
can make him say racist things and people will think
that he was a Trump nominee for something.

Speaker 2 (20:25):
Right, They're just gonna say horrible stuff.

Speaker 4 (20:27):
Oh so something like this is Chris Merrill for your
Confederate flag.

Speaker 3 (20:31):
Oh yeah. Real patriots love Confederate flags. Okay, you
get your Confederate flag from the Patriot Flag Company.

Speaker 4 (20:42):
Yeah, get as much money as you can. Yeah, enough
to just disappear.

Speaker 2 (20:45):
That's what I want.

Speaker 3 (20:47):
So OpenAI has now turned ChatGPT into
a shopping assistant as well, and it doesn't even
wait for you to ask. So there's a
new protocol with Stripe, Etsy and soon Shopify, and now
they're going to suggest and even complete purchases directly in
your chat. Walmart is leaning into AI powered retail as well,

(21:09):
pushing rewards and conveniences to help keep up, so
AI becomes your personal shopper.

Speaker 2 (21:17):
So there you go.

Speaker 3 (21:19):
You'll have every opportunity to just tell AI what you
want for dinner, or say AI, what should I have
for dinner, and it will say, oh, you can do
this and this and this. Or, AI, I need a
gift for my parents for Christmas, and it'll go,
great gifts are on the way. I told you during
the break, as long
as we figure out how to use it, like how

(21:40):
Iron Man used Jarvis.

Speaker 2 (21:42):
We're cool.

Speaker 3 (21:43):
You're aware that's fantasy, right? It's becoming more of a
reality as we speak. It's science future, Mark. It's comic books.
It's science future.

Speaker 2 (21:53):
That's what it is. I am Iron Man. Okay, okay,
that got a little weird. Yeah.

Speaker 3 (22:01):
Talking to my son during the commercial break. Uh,
you know, I work from the home studio.
And I said, hey, I was
just talking about you, about how you don't like to

Speaker 2 (22:10):
Hear AI generated voices on your.

Speaker 3 (22:12):
Videos and things like that. He goes, no, I skip it.
I turn it off right away. And he said, you
know the other thing is that, because he's of course unemployed,
he's a Gen Zer, uh, he said that AI ruins language.
He said, it's ruined the English language. And I thought, hmm,
I'm pretty sure the Zoomers did that. And he said no,
because it writes too professionally.

Speaker 4 (22:33):
Yes, I've been reading about this.

Speaker 3 (22:35):
So if he writes a letter, he sends the letter out,
you know, his CV or whatever. And he said, it
sounds like it was written by AI, because he's grammatically correct,
he's got sentence structure, all of the things that we
have to do. He says, so I think
about putting little mistakes in there. I go, yeah, but
you don't want to put mistakes in your cover letter.
You want to have your cover letter be right. You

(22:56):
want it to sound professional. He says, but if it does,
then they think I told AI to do it. Exactly.

Speaker 4 (23:00):
If you turn in clean copy now, people suspect it
of being AI. So I'm seeing more posts from teachers who
look for typos or grammatical imperfections in their students' papers
so that they know the kids weren't cheating with AI.

Speaker 3 (23:13):
But if you do that, you're not doing it correctly.
So I burn it all down. It's crazy.

Speaker 6 (23:21):
I'm, I've been known, the only thing academically I've
ever been good with was grammar and spelling.

Speaker 2 (23:28):
Yes, So now you're telling me the one.

Speaker 6 (23:31):
Advantage that I have is that I can write grammatically
correctly. That's the one thing I'm really, really good at. Now
I'm going to be, like, marked down. I'm gonna get,
like, you are now sus. Yeah, you're sus. And I
always use the Queen's English in texts. Now people are
going to think they're getting spammed.

Speaker 4 (23:48):
It's actually the King's English.

Speaker 2 (23:50):
Now, Oh, she's right, true, And I'm.

Speaker 5 (23:53):
A former copy chief.

Speaker 4 (23:55):
So how do you think I feel about AI?

Speaker 2 (23:56):
That's true. You're screwed.

Speaker 3 (23:57):
Yeah, actually, Mark, she's right. You just keep calling it
the Queen's English and we'll all know that it's you.
It's just a spiral we can't pull out of. We're
so screwed, you know. Speaking of being good at something,
I think that when it comes to resistance, people are
pushing back on the AI, the AI evolution or the
AI integration into our lives, and it's happening everywhere. It's

(24:19):
helping in certain areas, you know, like doctors are able
to speed through the evaluation of blood analysis and DNA,
and you know scientists are using it to their advantage
all these different things. But I think that being good
at something is what makes us unique and special and valuable,
and it gives us a sense of self worth. Think

(24:41):
about this. If anybody could hit a home run, the
major leagues wouldn't exist. If anybody could hit the ball
five hundred feet, Shohei Ohtani would not be special.

Speaker 2 (24:54):
Right. If anyone could paint.

Speaker 3 (24:57):
The Mona Lisa, then Da Vinci would have just been
a day laborer. He never would have gotten a gig.
They would have been like, oh, you do sculpting? Yeah,
we don't really need that. We've got a three D
printer for it. But what we do need are construction workers.
So we're gonna give you a union wage, and just
show up at six o'clock tomorrow and bring your trowel.

Speaker 4 (25:16):
Yeah, that wall over there needs a coat of beige.
That's exactly what would have happened. That would have been
Da Vinci.

Speaker 2 (25:21):
Right.

Speaker 3 (25:22):
If you are really good at your craft, but suddenly
anybody can be good at your craft, now you're a
part of the herd.

Speaker 2 (25:28):
And too much homogenization, even if it.

Speaker 3 (25:30):
Elevates the overall quality of life, even if we are
surrounded by beautiful sculptures, it doesn't matter, because now what
you've done, while you've elevated our knowledge base or our
artistic understanding, if everyone can do it and no one
is special, we've got a Soviet style communal system where

(25:51):
you're just a pawn for the oligarchs. There is no
avenue to stand out. The American dream of leaving something
better for your kids is dead. These, to me, are
the larger philosophical issues. The AI overlords, I don't
know if they care, because

(26:14):
they're gonna be the oligarchs who are pulling the strings.
But it's certainly what I think our elected officials need
to pay attention to. And sadly, I think many of them,
being octogenarians, by no fault of their own, have no grasp.

Speaker 4 (26:28):
But even octogenarians have heard their whole lives the old
truism that nothing worthwhile is easy, and now that's being
put to the test.

Speaker 3 (26:36):
But if they don't understand it, right? Do you remember
when we had the hearings? They were
talking to Mark Zuckerberg, and they were talking with the
tech guys, and they had no clue. They had no
clue what they were talking about. They didn't know the
difference between an IP and an ISP.

Speaker 4 (26:52):
No, no, no, And Zuckerberg's never had a clunker of
an idea, remember when he thought we were all gone?

Speaker 2 (26:56):
Hasn't.

Speaker 3 (26:57):
But how do you call him out on it if
you don't know what a good idea versus a bad
idea is because it's out of your wheelhouse. Well, this
is the issue with Congress.

Speaker 4 (27:07):
This is one of these situations where somebody should have
shown up with footage of when he thought we were
all going to be doing business with avatars.

Speaker 2 (27:15):
Like one hundred percent, but they didn't know.

Speaker 3 (27:19):
Look, Mark, you would be better in Congress than
many of our octogenarians, because

Speaker 2 (27:23):
They absolutely have no idea.

Speaker 3 (27:26):
They don't know what it means to have a Facebook page,
or that you have to be over sixty now
to have a Facebook page.

Speaker 2 (27:32):
They have no clue.

Speaker 4 (27:33):
Oh god, is that true?

Speaker 2 (27:34):
Yeah? Pretty much?

Speaker 3 (27:35):
Okay, Facebook is for the older generation. Oh no, yep, yep.
You have to be doing dance videos on TikTok if
you want to be hip. I don't do the TikTok. No,
you're losing.

Speaker 5 (27:48):
Yeah, I don't use social media. I'm a catch.

Speaker 2 (27:52):
Okay, Grandma.

Speaker 4 (27:53):
I've never used a dating app either.

Speaker 2 (27:55):
Uh okay, Well that's because you're a chick.

Speaker 3 (27:58):
Yeah, that's because you're just gonna go get
free drinks.

Speaker 2 (28:01):
Yeah, I get you. I get you.

Speaker 3 (28:05):
Look, I've never used a dating app, because I tricked
the woman into marrying me before they were a thing.
But god, if I hadn't, I would be so screwed.
Now, AI. It watches the road,
and it watches you, and one day soon it's gonna
decide who gets to keep driving and who doesn't. So
why if everything is safer, is your insurance still going up?

(28:26):
You'll find out next time. Chris Merrill, KFI AM six
forty, live everywhere on the iHeartRadio app. Hi, Chris Merrill,
KFI, listen anytime on demand in the iHeartRadio app. One
company is getting ready to can a whole lot of
people and then spend a good jillion dollars. All right,
I'll tell you where that's happening. It's a lot closer

(28:48):
than you think. That's coming up here after nine o'clock.
Massive plot twist in Hollywood.

Speaker 2 (28:55):
AI is.

Speaker 3 (28:57):
AI is confusing for an awful lot of people, not
the least of which are businesses trying
to do two things at once, and this is where
things can get really tricky. I think it's probably safe
to say that we all implicitly trust insurance companies.

Speaker 2 (29:12):
Right.

Speaker 3 (29:12):
No one's ever had an issue with an insurance company.
Nobody ever thinks, wow, I think their customer service sucks,
and I feel like they are not in this for
my best interest, but rather just to take money from me.

Speaker 2 (29:24):
Totally true.

Speaker 3 (29:27):
At no point in time has any insurance company ever
tried to.

Speaker 2 (29:31):
Hide or not pay claims, or to withhold or.

Speaker 3 (29:36):
Not give you pre-authorization, for instance, or maybe to
not even pay out on your auto claim or not
even pay your medical bills after an auto accident that
has never in the history of mankind ever happened. Well,
now there's another issue, and that is that more and
more car companies are beginning to integrate AI into their dashboards.
So some of you may have some of the newer

(29:58):
technology in your vehicles. I do, and I love it,
absolutely love it. It's not yet AI-driven
in my car, though. I've got the crash
alert system, I've got the adaptive cruise control, I've got
the lane-keeping assist, but all of those things are
done using cameras and feedback and more typical computer programs

(30:23):
that use lidar and then process the information. They're
not using AI. You know how I know? Because when
a car starts to turn in front of me, my
car starts to freak out and slam on the brakes,
even though there is not a chance in hell I'm
going to run into that car. It's way down the road.
But my car says, oh, the car in front of
you has slowed down. The car in front of you

(30:44):
is turning, and it hasn't cleared the frame for the
lidar. It hasn't figured out that that car is
no longer a threat. AI would process that better and
figure that out. Then, on my way home from
work today, I nearly hit a car that was turning
into a driveway.

Speaker 2 (31:06):
That was my fault. It was not me.

Speaker 3 (31:08):
I nearly hit it, not because I didn't see that
they were turning, but because I thought they were turning
at the street just after the driveway. Now, I wasn't
in a case where I had to slam on the brakes.

Speaker 2 (31:17):
There was no squealing of the.

Speaker 3 (31:17):
Tires, but I did pass the car thinking, oh, that
was closer than I wanted. Did my car warn me
that I might be too close to that one? No.
But a car that is still one hundred yards off
that's making a turn, it freaks out about. AI would
correct that. It would do more analysis rather than just
relying on the information, the feedback, and then the response.

(31:42):
AI would process that. Once you have AI-assisted driving,
and some of the new vehicles are already integrating it
beyond just what I'm discussing here, now you've got an
issue. Suppose that there is an accident: insurance companies
are trying to figure out who to blame. So
AI was in control, and if AI is in control

(32:05):
with, say, a Waymo or something of that sort, right,
and there's an accident, who do you blame?

Speaker 2 (32:10):
And if AI driving.

Speaker 3 (32:12):
Is actually safer, how do you keep charging more for
your car insurance? Well, they seem to have figured out
how to do this.

Speaker 2 (32:21):
One.

Speaker 3 (32:22):
They encourage the integration of AI because it is statistically safer. Two,
they raise your price because there's new technology and you
don't know, so they say, ooh boy, you know, if
there is an accident, it's gonna be very expensive to fix,
and you know you're always in charge of that vehicle,
so we're gonna have to up your insurance rates.

Speaker 2 (32:42):
I mean, who can blame them? They figured out how
to get the best of both.

Speaker 3 (32:46):
Worlds: fewer accidents to pay out on, and charging you more.
And who are the insurance companies to say no to
more profit? I've never met one. All right, Hollywood's latest
plot twist is simple.

Speaker 2 (33:03):
Fire.

Speaker 3 (33:03):
The cast and crew, charge the audience more, roll the
credits. Boom.

Speaker 2 (33:07):
Who's getting axed? That's next.

Speaker 3 (33:09):
I'm Chris Merril, KFI AM six forty, live everywhere, the
iHeartRadio

Speaker 1 (33:13):
App. KFI AM six forty on demand.