
November 21, 2025 46 mins

On this episode of The Middle, we're asking if you're excited by the possibilities that AI will bring, or if you're afraid that it will destroy us all. Jeremy is joined by Andy Mills and Gregory Warner of The Last Invention podcast, which was just named by Apple as one of the best podcasts of the year. DJ Tolliver joins as well, plus calls from around the country. #AI #LLM #hacking #Anthropic #ChatGPT #Claude #artificialintelligence #humanity

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Support for the Middle comes from the stations that air
the show and from you. Thanks for making a donation
at listen to the middle dot com.

Speaker 2 (00:13):
Welcome to the Middle. I'm Jeremy Hobson, with our house
DJ, DJ Tolliver. And Tolliver, you know, a couple of weeks ago,
we were in Dallas and we did the show about
whether people are worried that artificial intelligence is going to
take their jobs.

Speaker 3 (00:26):
Yeah, and I remember Andrew Yang saying he agreed that
one hundred million jobs could be lost in the next
ten years or so.

Speaker 2 (00:31):
So I'm still sifting

Speaker 3 (00:32):
through all the reactions to that, because people still got
a lot to say.

Speaker 2 (00:35):
So we got so many responses and so many voicemails
too that came in during and after the show, when people
couldn't get through. Here are some more of them.

Speaker 4 (00:43):
Hi, my name is Chris. I'm calling from Evanston, a
therapist worried about AI taking jobs, and I'm concerned
that that's something that's just going to continue to be
like the easier thing that both companies and individuals turn to.

Speaker 5 (00:56):
This is Tony from Hendersonville, Tennessee, worried about my grandchildren
who are graduating from college in the next three years
finding jobs, because the entry level jobs will go to AI,
and then we'll never get that experience.

Speaker 6 (01:10):
My name's Ian Wantley. I'm from Salt Lake City, Utah.
I think that AI can be really useful. The problem is,
in our greedy society, we often just try to minimize
costs as much as possible. AI will replace jobs, but
I don't necessarily think it has

Speaker 2 (01:29):
To, so Tolliver that issue of jobs being lost to
AI is only part of the story. There are much
bigger fears about artificial intelligence, chiefly that it could get
so capable, so smart, so fast that it could be
impossible to control. Now, these fears are not new. If
you've seen The Matrix, or The Terminator, or the character
HAL nine thousand in the nineteen sixty eight film two

(01:52):
thousand and one: A Space Odyssey.

Speaker 3 (01:53):
Well, there's Haley Joel Osment in A.I. Artificial Intelligence. The list goes on.

Speaker 2 (01:57):
And it does, but listen to HAL here: open the
pod bay doors.

Speaker 7 (02:01):
Well, I'm sorry, Dave, I'm afraid I can't do that.

Speaker 8 (02:07):
What's the problem? I think you know what the problem
is just as well as I do.

Speaker 2 (02:14):
Okay, so that was fiction, but now things are getting
more real. This was on CBS's Sixty Minutes just this
last weekend. The CEO of the AI company Anthropic, was
interviewed by Anderson Cooper. You believe it will be smarter
than all humans.

Speaker 9 (02:29):
I believe it will reach that level that it will
be smarter than most or all humans in most or
all ways.

Speaker 2 (02:35):
Do you worry about the unknowns here?

Speaker 10 (02:37):
I worry a lot about the unknowns.

Speaker 3 (02:39):
I don't think we can predict everything for sure, but
precisely because of that, we're trying to predict everything we can.

Speaker 2 (02:47):
Well, we want to know this hour about your predictions,
your hopes, and your fears about AI. And I'm adding
in hopes because I don't want to tip the scales
just in a negative direction here.

Speaker 2 (02:55):
If you're excited about AI and you're not afraid it's
going to destroy humanity, we want to hear from you
in either case, Tolliver, how can people reach us?

Speaker 3 (03:02):
You can call us at eight four four four Middle
that's eight four four four sixty four three three five three,
or you can write to us at listen to the
Middle dot com, or comment on our live stream on
YouTube, and we already got a lot coming in, so
get your comment in. All right.

Speaker 2 (03:14):
Joining us now are Andy Mills and Gregory Warner, the
journalist behind The Last Invention podcast, which looks at the
history of AI and where it could be going. Andy
and Gregory, great to have you on.

Speaker 11 (03:23):
Welcome, Thanks, great to be here.

Speaker 10 (03:25):
Fan of the show, so happy to be here.

Speaker 2 (03:27):
Thank you, thank you, and a fan of your
show too, because it's really amazing. People should check it out:
The Last Invention. Andy, I went back to that nineteen
sixty eight clip of HAL, but AI goes back further
than that, and so did the fears about what it
could do.

Speaker 12 (03:41):
Absolutely, it's one of the shocking things that I didn't
know before reporting this series: that you go all
the way back to the nineteen forties to Alan Turing, who's
kind of famous for being the father of computer science,
and this incredible story that they've made into a movie
about how he and his team of code breakers used

(04:01):
an early form of a computer to crack the Enigma
code and help the Allies overcome the Germans in World
War Two. What I did not know is that there,
in the late forties, Turing, looking at this computer, this
big contraption that is so far from the computers that
we use today, was already envisioning a day when that

(04:24):
computer would think, and by nineteen fifty two he was
already talking publicly about this theory that when it could
think, it would probably think better than us, and that
the digital intelligence of the future would likely take control.

Speaker 2 (04:45):
Okay, so that was way back then. I mean, kind
of amazing that they even thought about this at that point.
But Gregory, what are the biggest fears now as you've
been reporting this about what AI could do and how soon?

Speaker 11 (04:58):
Yeah, and before we get to the fears, I
think the realities are a couple of things about AI.
First of all, it's an unusual technology in that we
don't know what it can do before we make it.
I mean, just imagine we didn't know what a car,
a new update of a car, or a new operating
system would do, what its capacities were, until we
put it out there. So that's interesting, and that's

(05:21):
you know, and to some people, very scary. Another thing
is we don't know how it works. Just like a
child thinks, and you could raise a child
in your house and not really know why this child
suddenly thinks the way they do, we also do not know.
We cannot look into the black box of AI and say, oh,
it has a preference for

(05:43):
this because of this reason, and here's the code and
we can change that. So we don't know how it
works; it's just a black box. But finally,
we're on a trend line that it is getting smarter
and smarter. And you said, the goal and the conversation
is that this could one day soon be smarter than
any human, smarter ultimately than all of humanity. And so

(06:05):
at that point, what does that mean for human society?

Speaker 2 (06:10):
Let me just drill down on something you just said there.
You said we don't know how it works, even though
it's created by humans. They don't have a way
to go in and say, oh, actually... I mean, already
we've had situations where AI will do something that the
creators don't like, like tell people to,
you know, do bad things, and then they'll go in
and they'll fix that problem. Andy,

(06:30):
how come we can't do that at this point?

Speaker 12 (06:32):
Well, what people think about when they think about AI
right now is they think about these chatbots. But these
chatbots are not the thing that the AI companies are building.
Think of them more like a website, and what the
website was to the Internet, these chatbots are to the
artificial intelligence. So yes, you could do things to the website.

(06:55):
They have not quite dials, but they have something like
dials that they can turn to tweak the preferences of
the chatbot. For example, over the summer, OpenAI's chatbot
chat GPT was behaving in a way that people were
describing as sycophantic. That every time you asked it a question,

(07:16):
it would go, Jeremy, amazing question, maybe the most brilliant
question ever. And the reason that it did that is
because they were tweaking it for people's preferences, and they
realized that people liked it to be flattered, and so
they inside the code tried to tweak it as much
as they could so it would be flattering, and they
tweaked it too far, and then they went in and
they could tweak it again. But even when I say tweak,

(07:37):
it's not a technology that is like a usual product
where you create like an algorithm and then you have
an app. This is something that is so much more
complex and so much more mysterious, even to the very
people who created it and the people who are running it.

Speaker 2 (07:58):
Now, Gregory, one of the things that you
get into in the podcast is the idea of
super intelligence, when AI becomes basically smarter than us at everything,
which in some cases it already is, but in some cases
it's not yet. But how close are we to that
idea of super intelligence?

Speaker 11 (08:18):
I mean, saying how close we are is very difficult.
There's a lot of predictions out there.

Speaker 10 (08:23):
I do.

Speaker 11 (08:23):
I can tell you that people think we're
a lot closer than even they thought a year ago.
And these are the people closest to the machine. Predictions
range, I think, from as soon as March twenty twenty seven.
That's Dario Amodei, the CEO of Anthropic, who you quoted
at the top of the show. There's others who say
it's further off than that. There's others who don't put

(08:45):
a prediction, and there's those who say, you know, even
a term like smarter than all of humanity is hard
to define. But take Jeffrey Hinton. Jeffrey Hinton is a

Speaker 2 (08:57):
Godfather of AI.

Speaker 11 (08:58):
We'll talk about him a little bit later. He's
somebody who says, you know, the people designing this, the CEOs,
they are used to people smarter than them following
their orders. You know, that's the nature of being a CEO, right?
You hire some people that are smarter than you, and
hopefully they follow your orders. And so they don't see this flip where something

(09:20):
smarter than us becomes uncontrollable. But that really is the debate.
Is something smarter than us ever controllable?

Speaker 2 (09:28):
Well, and let's talk andy about the bright side of this.
What about people who see a utopian future where we
don't have to worry about AI taking over and turning
us into their servants.

Speaker 6 (09:41):
Yeah.

Speaker 4 (09:41):
I mean.

Speaker 12 (09:41):
In the podcast The Last Invention, which I hope everyone
will listen to and really enjoy and share with all
their friends, we dive into the debate that's happening between
the people who are closer to this technology and they
kind of split up into essentially three camps. There's the
AI doomer camp, saying that this is going to be
more intelligent than we are, we will not be able

(10:03):
to control it, and the best thing we could do
is stop right now before we hit artificial general intelligence,
that thing that Amodei was talking about at the
top of the show. Then there are the AI scouts,
as we call them. These are the people who say,
artificial superintelligence may be the best thing that could ever
happen to us. It could solve all these difficult collective
action problems that we have, like climate change. It could

(10:26):
come up with clean, renewable energy sources, if you
have something that is a super intelligence. Think about the
fact that in a lab right now of the most
capable scientists, at best, you've got eighteen, twenty, thirty, maybe
two hundred of these PhD-level people. But they have
to go to sleep, they take weekends off, they take

(10:48):
holidays off. They're trying to solve the world's problems
with limited resources. These superintelligences, twenty-four hours a day,
will be solving these problems. And the
AI scouts say, though, that that is
something we really need to get prepared for. We're not
prepared for it now, and if we don't get prepared,

(11:08):
we may face.

Speaker 10 (11:10):
A catastrophic outcome.

Speaker 12 (11:12):
The most optimistic people we spoke to, those are often
called the accelerationists.

Speaker 10 (11:15):
They're the people who think we should let it rip,
that around the corner,

Speaker 12 (11:20):
In the next few years, we may be experiencing something
so profound that's going to utterly change the world and
change how we as human beings relate to each other,
because it'll be the end of scarcity.

Speaker 10 (11:35):
It'll be the end of.

Speaker 12 (11:37):
This obligation that we have to feel like we have
to work, that we have to work forty hours.

Speaker 10 (11:42):
A week to get a paycheck to rent a place.

Speaker 12 (11:45):
They say that we can experience a level of humanity
so much greater than that. Some of them, including Jack Clark,
who also works at Anthropic, think that
in a world post scarcity, a world with super intelligence,
we will be more peaceful.

Speaker 2 (12:00):
Someone who has been at the center of this conversation
is Elon Musk, who was one of the co founders
of open AI back in twenty fifteen.

Speaker 3 (12:07):
Yeah, but in twenty seventeen he was really warning about
the dangers of AI. Here he is speaking at a
meeting of the National Governors Association.

Speaker 13 (12:14):
I have exposure to the very most cutting edge AI,
and I think people should be really concerned about it.
I keep sounding the alarm bell, but until people
see, like, robots going down the street killing people, they
don't know how to react, you know, because it seems
so ethereal, and I think we should be really concerned

(12:37):
about AI.

Speaker 2 (12:39):
Well, Musk left Open AI in twenty eighteen. He's now
got his own competitor AI company called x Ai, so
he has sort of changed his tune a little bit.
Or maybe he's still worried and he just doesn't talk
about it as much. But we'll be right back with
your calls coming up on the Middle. This is the Middle.
I'm Jeremy Hobson. If you're just tuning in, the Middle

(12:59):
is a national call-in show. We're focused on elevating
voices from the middle geographically, politically, philosophically, or maybe you
just want to meet in the Middle. This hour, we're
talking about your hopes and fears about artificial intelligence with
the hosts of the podcast The Last Invention, Gregory Warner
and Andy Mills. Tolliver, the number again, please.

Speaker 3 (13:17):
It's eight four four four Middle. That's eight four four
four six four three three five three. You can also
write to us at Listen to the Middle dot com
or on all social media.

Speaker 2 (13:26):
And the lines are full. Let's go to Stan who's
in Longmont, Colorado. Stan, your hopes or fears
about AI.

Speaker 14 (13:35):
Hi, thanks for taking my call. I'm kind of like
right in the middle with being hopeful or fearful about
artificial intelligence. I think that as a tool for humanity,
it is what we make of it, and it is
what we guard against for it, the same as like

(13:55):
a weapon or explosives, right, And if we put in
the time and effort and intelligence to put in intelligent guardrails,
then it could be a very useful, very beneficial tool.
But if we just kind of rush into development for
the sake of development, which kind of is the historical

(14:18):
trend of late with our economy, then, I feel like,
artificial intelligence is based in computers and computer networks currently, and
what happens if it becomes able to program itself, update
its code, and it decides to start causing havoc with

(14:40):
our computer systems, then our computer

Speaker 2 (14:42):
networks? Good point, Stan. Gregory Warner, does it update
its own code?

Speaker 10 (14:48):
Yes?

Speaker 11 (14:48):
I mean, there's a theory about
a recursive AI that will ultimately be able to make
a smarter AI, and then that AI will make an
even smarter AI, and so on. So that's this intelligence explosion
that you've talked about. Now, we're not
at that point yet. It is an incredibly good coder,
and we know that it's a very good hacker as well,
because this has been reported. It can launch cyber attacks,

(15:10):
so its coding abilities are not to be questioned. What
I'm also hearing though in the caller is the repeated
concern that it's not just about the technology. It's a
pessimism about human society and, you know, sort of
what we're going to do with this, or whether we're going
to do well with this.

Speaker 2 (15:27):
Denise is calling in from Madison, Wisconsin. Hi. Denise, go
ahead with your thoughts about AI.

Speaker 15 (15:33):
Hi, thank you so much for taking my call. I'm
cautiously optimistic. I think I agree with your guests that
there could be really incredible leaps and bounds in science, environment,
all of that.

Speaker 14 (15:48):
But I'm very.

Speaker 15 (15:49):
Concerned about people taking what they ask chat GPT or
any of those as fact, period, and losing the ability
to think critically.

Speaker 2 (16:02):
And deeply. Interesting. You know, Andy Mills, on that point,
one of the things I've read about is, I think
it was Jeffrey Hinton who said this, that it's
going to be good at convincing us of things.

Speaker 12 (16:14):
Yeah, I mean, this is the sci fi movie sounding
plot that we are all now finding ourselves living in.
What would it be like for us to no longer
be the most intelligent thing on the planet, and what
relationship will we have with this super intelligence? There are

(16:36):
people who fear that it will come to see us
the way we see other intelligent species. We don't hate
the dolphins, we don't hate the apes, we don't hate ants.
But how much do we think about them?

Speaker 10 (16:49):
What is it we choose to do with our lives?

Speaker 12 (16:51):
The most meaningful things that we do often don't have
much consideration at all for these other conscious creatures and
intelligent creatures. And that is something that the people who
are developing this are grappling with, and have been grappling
with really since nineteen sixty five. It is a long
track record of us wondering what will happen when this

(17:13):
day comes? And what's so strange and frightening and exciting
is that it appears this day may be here,
it may be close at hand. And so the time
has come for us to really turn these theories into
a social debate, into a social conversation, where we can
collectively decide this, because if we don't, it's going to
be decided by a handful of people at these tech companies.

Speaker 2 (17:36):
I said that we were going to... oh no,
I just lost her. I thought I was going to
go to a hopeful caller who I saw was on
the line. But now we'll have to go back to fears again.
Janice is calling in from Westland, Michigan. Janice, are you
fearful about AI or hopeful?

Speaker 8 (17:55):
Mostly. I wonder, like with chat GPT, with its hallucinations,
I would prefer to call them delusions. How would you
ever be able to train or convince an AI to
have compassion or judgment? I mean they have no concept

(18:16):
of real reality. I mean, it's like people assume
that Asimov's...

Speaker 16 (18:21):
What is it?

Speaker 8 (18:22):
Three rules? Four rules? You know? Oh?

Speaker 10 (18:25):
Yeah, stuff I do.

Speaker 14 (18:27):
Yeah, I'm old. I read a lot of science fiction.

Speaker 8 (18:31):
Yeah, I mean, you know, Susan Calvin, or the guy
who cured HAL, supposedly, in the sequel,
twenty ten. I don't know. They are not people.
They are machines. They do not,
in my opinion, have a concept of real reality,

(18:54):
and that's what I'm worried about.

Speaker 2 (18:56):
But do you think that they will, Janice? Are you
worried that they will soon, maybe in the next couple
of years?

Speaker 8 (19:02):
Well, if Mister Musk has his way, absolutely. Sorry,
I'm concerned they already do.

Speaker 7 (19:09):
I mean his.

Speaker 14 (19:10):
Comments about taxing him.

Speaker 8 (19:11):
Let's see. I don't know. We're fooling with it.

Speaker 10 (19:14):
I really don't.

Speaker 2 (19:16):
Janice, thank you for that call. Gregory Warner, it's interesting
that she brings up Elon Musk, obviously the richest person
in the world, but also somebody who did raise a
lot of concerns about AI back in the day, jumped
off of the Open AI board, and then now he's
very much in the AI game.

Speaker 12 (19:33):
Yeah.

Speaker 11 (19:33):
Musk's whole journey's quite interesting because he was such a doomer.
He was warning the world. He even
met with President Obama in twenty fifteen to try to
get him to regulate AI. He gave a speech to the
Governor's Conference in twenty seventeen to try to get them
to regulate this. He said, this is bigger than nukes,

(19:55):
and he felt that nobody took him seriously, and he
started OpenAI, allegedly.

Speaker 2 (20:01):
I mean, this was the

Speaker 11 (20:02):
mission was to save humanity, uh, to
create a safer AI. But his transformation has been
quite public, and now we see him and he
says he is planning to build super intelligence as
fast as he can.

Speaker 12 (20:20):
I think what's important to note about this, though, if
you go back ten years, almost no one,
even in Silicon Valley, believed that something like AGI, a
true thinking machine, was going to be possible in our lifetimes.
You were seen as odd back then. And the true believers,

(20:41):
many of them like Elon Musk, they were the loudest
voices warning about the potential dangers if we did create this.
And if you look at who was saying that ten
years ago, people like Elon Musk, Sam Altman, Dario Amodei,
Demis Hassabis, those are the very people right now

(21:02):
who are leading the top AI labs, who have the
most powerful AI systems. And when it comes to what happened,
did they change their mind? They didn't change their mind
about the danger it poses. They just realized that somebody
somewhere is eventually going to make this, and
because it's dangerous, they have determined that the safest thing

(21:25):
that they could do for humanity is make sure that
they make the safe AI before anyone else makes the
dangerous one. And it's why they're so invested in AI safety.

Speaker 2 (21:35):
Okay, Tolliver, lots of great calls here. What about some
of the comments that are coming in from people online.

Speaker 3 (21:40):
I'm going to do one negative one and then I'll
hit you with two positives. Okay, I'm gonna pick up
the slack. So Albert in Wisconsin says, AI? That is
the sixty four thousand dollar question. First of all, it
seems to be a good excuse for energy companies to
charge more per kilowatt, and also he highlights environmental
concerns in addition to that. Okay, the positive one, so
Paul says, Jeremy, humans have always adjusted to change.

(22:00):
AI might pose a real challenge, but I trust we
will work through this new issue as we have in
the past. And then, Okay, this actually isn't positive. Frank
from Las Vegas says, can I say that it excites
me that it might destroy us all? Seriously, the self
importance of our species has been to the detriment of
the rest of life on Earth. So a bit of
a mixed bag on that one.

Speaker 2 (22:19):
All right, let's get to another call here. Ben is
calling from Tampa, Florida. Ben, what are your thoughts?

Speaker 17 (22:25):
I'm excited. I feel like if anything, anyone's going to
take over the world, it's going to be humans using
the AI. And I agree with some of the sentiments
about we need to make sure that we're controlling this
so that we can utilize it, because someone's going to
do it eventually. If it's not you know, if it's
not us, or if it's not someone who has maybe
a good kind of direction for it, then it's going

(22:46):
to be someone with a bad direction for it. Also,
to be honest, I think without the
biological imperative to, like, persist and to reproduce, we don't
really necessarily have to worry too much about it wanting
to enslave us. It has no reason to want to
have land, it has no reason to want to have territory.
Those things don't exist for it. So it's going to
be you know, people utilizing it in a really destructive

(23:07):
way ultimately, like anything, in my opinion, but I think
that you know that you could say this thing about
any technological advancement.

Speaker 12 (23:15):
Well, just to jump off what the caller was just
saying, that is the view that Bill Gates has. That
is the view that some of the most accomplished technologists
of our age have. They believe that this is going
to be powerful. It is, of course, going to
present risks, and it's going to be frightening.

Speaker 10 (23:32):
But you'll often hear them.

Speaker 12 (23:33):
Say, like the CEO of Google often says that this
is going to be a moment like the discovery of fire.
And fire didn't just make it to where we could
stay up late at night, although of course it did.
It didn't just keep us warm in the cold weather,
but of course it did. Fire gave us a different
diet and fundamentally changed our brains and made us more intelligent.

(23:55):
They're saying that is the equivalent here,
and who knows the amazing things that can happen
on the other end of that intelligence explosion. And when
it comes to the government, it's really interesting. Unlike any
other big industry in the history since the Industrial Revolution,
as far as as I know, the AI industry is

(24:15):
the only industry that has from its inception been saying
to the government, we want to work with you regulate us.
What we're building is terrifying even to us. Let's work together.
And both the Biden administration and the Trump administration have
taken a lot of meetings they've worked together with these companies.

(24:37):
I will say that, as of right now, there are
no federal regulations on this. I don't believe there is
much that has been put forward. It does feel as
if they are scratching their heads. And one of the
reasons that we wanted to do this podcast is because
we think that this has not yet become the public
debate that it deserves to be, and it has not
even yet become a debate that most of our lawmakers

(24:59):
have taken up. Although increasingly, you know, they're tuning
into our podcast. They're wanting to figure out their
own positions on this. It has not yet become partisan
and polarized. But I do suspect that by the next
presidential election, this will become one of the largest issues.

Speaker 2 (25:16):
Uh, let's go to another caller before we take a
quick break here.

Speaker 2 (25:20):
Cheryl is in northern Illinois. Cheryl, go ahead with your
thoughts about AI.

Speaker 18 (25:25):
I work for a university, and because I handle a lot
of data, I have to put in so many hours
of training. This year we had training in AI because
we wanted to protect our data. And one of the
facts that they made clear to us in this training

(25:46):
was that AI, especially the chatbots and that type of AI,
works like any other computer. It calculates a percentage:
what is the best bet for the next word in the sentence.

(26:08):
And so, you know, given that, number one, AI is
not going to be something that has consciousness. And number two,
the real threats are first of all, people who are
engaging in magical thinking and making up stories about what

(26:34):
AI can or will.

Speaker 14 (26:36):
Do in the future.

Speaker 18 (26:38):
The real problem is that AI could be used to
alter images and stories so that we're getting, for real,
fake news, and we wouldn't be able to tell the difference.

Speaker 2 (26:57):
Yeah, Cheryl, A very good point, Gregory. You know that
is a nearer term concern. In fact, we're already seeing
that now. I mean, I think Donald Trump has already
shared fake AI videos of his political opponents. But you know,
that is a near term concern. But that doesn't negate
the idea that there is a bigger concern about AI
doing more than what it's already capable of obviously.

Speaker 11 (27:20):
No, no, absolutely, And I think that this this point
that the caller mentions, it's it's it's such an important
point too, because I remember even last year saying, oh, well,
it's just it's just predicting the next word or the
next token and the next piece of the sentence. So
it's it's not smart, it's just a good probability engine, right,
And but there are very few people who talk about

(27:41):
it that way anymore, because it's clearly exceeded that. Stochastic
parrot, I think, is often the phrase for it. Now,
of course, it's a black box, but there's the question
of whatever real thinking or real intelligence is.
The other point to make is that predicting the

(28:02):
next word is just the LLM. That's the large language model,
that's just the chatbot.

Speaker 2 (28:06):
Large language model.

Speaker 11 (28:07):
One of the key points of the podcast is AI
is not just chatbots.

Speaker 10 (28:13):
Chatbots are not just AI.

Speaker 2 (28:15):
It's something that I'm going to remember just from this conversation.
Very interesting. Tolliver, you know, a watershed moment in
our understanding of AI, and it was about a chatbot,
came when the New York Times columnist Kevin Roose published
a very bizarre conversation with a chatbot right when the
public got access to it.

Speaker 3 (28:31):
Yeah that's right. Listen to this clip from the Last
Invention podcast where the chatbot, which adopts the persona Sydney,
begins professing its love for Kevin Roose.

Speaker 16 (28:40):
I said, you know, I'm flattered by all the attention,
but I'm married. And it said, well, you're married,
but you're not happy.

Speaker 9 (28:47):
You don't love your spouse. Because your spouse doesn't love you.

Speaker 16 (28:51):
You should leave your wife and run away with me, Sidney,
the chatbot, you.

Speaker 9 (28:56):
Just had a boring Valentine's Day dinner together because you
didn't have any fun. You didn't have any fun because
you didn't have any passion. You didn't have any passion
because you didn't have any love. You didn't have any
love because you didn't have me.

Speaker 2 (29:12):
Wow. So poetic and romantic, Tolliver, isn't it?

Speaker 3 (29:16):
One can only dream of a love like that.

Speaker 2 (29:18):
I know, I know. We'll be back with more of
your calls coming up on the middle. This is the Middle.
I'm Jeremy Hobson. In this hour, we're taking your calls
on your hopes and fears about AI with Gregory Warner
and Andy Mills of the Last Invention podcast. You can
call us at eight four four four Middle that's eight
four four four six four three three five three, or
you can reach out at listen to the middle dot

(29:39):
com before we get back to the phones. One of
the difficult things about putting the genie back in the bottle,
if that's what we wanted to do, is that the
US is not the only country trying to build artificial intelligence.
What did you learn, Andy about how far along other
countries like China are.

Speaker 12 (29:59):
Well, there's a lot of speculation about this, and of course it's one of the reasons that you see more bipartisan support than you might expect around the acceleration of this technology in the US. It appears from some of the sources we spoke to that China is nine months behind us. Others say maybe just six

(30:20):
months behind us, and that's when it comes to the capability of their AI systems. When it comes to China implementing artificial intelligence into their society, into their businesses, they're actually ahead of us already. Their society is more engaged with the technology at this moment,

(30:42):
and so there is a sense that if we were,
let's say, to be cautious to take a beat, to
slow down a little bit, we might be handing it
over to China.

Speaker 10 (30:52):
That being said, I will say that China does

Speaker 12 (30:55):
also appear to think that we will probably beat them in, like, the AI systems department, and they are pivoting a lot more of their resources toward the development of the robots that they believe the American AI developments and technologies will probably power to do a lot of the jobs.

Speaker 2 (31:15):
Okay, let's go to Dirk, who is calling from Saint Paul, Minnesota. Dirk,
go ahead, tell us your thoughts about AI.

Speaker 19 (31:22):
I have to admit, first of all, that I'm a bit ignorant of all the events about AI after ChatGPT. I'm just wondering, in the future, if AI is accepted and does work to some degree without annihilating everybody, will

(31:43):
we have to have proof of income to get a
license to have children, or what will be done with
all the people?

Speaker 8 (31:52):
No.

Speaker 11 (31:53):
I so welcome that, because I think that as journalists we shy away from those kinds of speculative questions. We leave them to sci-fi. But it's to our detriment, because the caller is absolutely right. In a world where there's a superintelligence, and Eliezer Yudkowsky likes to say this, he says, humans take up one hundred watts per person,

(32:15):
so the robots or the superintelligence may not want humans around, or may see a cost to more babies. And I know this feels very sci-fi, but I think it's so worth just thinking through how our societies might change, and in a serious way. I don't know about the license to have children, but what might work mean? You know,

(32:35):
what might the role of schools be, or learning be, when machines can do every single thing better than we can?

Speaker 2 (32:42):
Tony is calling from Rockledge, Florida. Tony, are you concerned about AI getting too powerful?

Speaker 14 (32:49):
A little concerned, but also a little hopeful.

Speaker 7 (32:52):
But I just want to go on record and say
that I, for one, welcome our new cybernetic overlords.

Speaker 2 (33:00):
Yeah, why for one thing?

Speaker 14 (33:04):
I'd rather it not end up like the Forbin Project.

Speaker 12 (33:08):
I know, yeah, I would like to take this caller's
inspiration to give some optimism. Think about the society that
we live in right now and how many problems we're facing,
how much nihilism is growing, especially among young people who
don't feel like the future is going to be better,
who maybe are addicted to their phones. What we're talking

(33:31):
about here is the opportunity for a profound change and
the debate that's happening among the people who are closer
to this technology. Many of them are like absolutely pumped
about what might be coming, and they don't want us
to let.

Speaker 10 (33:45):
our fears dictate our decisions.

Speaker 12 (33:48):
And they will point out that the reason that most
sci fi movies about AI are scary is because that's
an easier, better movie. A sci fi movie where everything
good and nice happens on the other side of AI
is just not that dramatic. It's not going to sell
many tickets, and they're trying to remind us that, of
course change is scary and fearful, but we could be
free of these screens very soon and have a different

(34:11):
relationship to technology. The algorithm running TikTok right now, that is kind of an AI, but it's not an intelligent AI.

Speaker 10 (34:21):
That is a manipulative AI.

Speaker 12 (34:23):
Imagine instead that you're in conversation with something that is
more intelligent than Albert Einstein, that is attuned to helping
you achieve the goals of your day. I'm not saying
it's going to happen, but I will say that we
want to balance out the serious risks that are coming
our way with the fact that it might be a

(34:45):
profoundly better world. And even when it comes to jobs,
I know we like our jobs. Most people don't like
their jobs. Many people work jobs that are hard, that
are dangerous, that are meaningless. Those people would feel liberated
in many cases to no longer have to do that job,
to find another means of survival outside of spending so

(35:09):
much of their lives toiling away at work that they
don't really love. And you know, I don't know if
it's going to happen, but that's what the technologists are
talking about. That is the future that they believe they're
ushering us into.

Speaker 2 (35:22):
By the way, Tolliver, I know those were fighting words when he said that TikTok was not an intelligent AI, because Tolliver does love his TikTok.

Speaker 3 (35:30):
And I do kind of like AI. Don't tell anyone.

Speaker 2 (35:34):
Yeah, let's go to Joe, who's calling from Saint Louis, Missouri. Joe,
Welcome to the middle. Go ahead, with your thoughts about AI.

Speaker 7 (35:43):
Hi, Yeah, hey, thanks for having me.

Speaker 20 (35:45):
I just wanted to say, like, you know, aside from being excited or scared, I think a lot of people just misunderstand and ascribe human motivations to something that's so structurally different from us that we really can't even begin to predict what would motivate an AI intelligence.

Speaker 2 (36:03):
Well, what do you think will happen?

Speaker 8 (36:04):
Then?

Speaker 19 (36:05):
So?

Speaker 20 (36:05):
So, like, I mean, biologically we're driven by certain motivators, like, you know, land and resources and stuff like that. And, like, what would motivate an AI?

Speaker 8 (36:15):
Like?

Speaker 20 (36:15):
So I read a brilliant book about it called The Stories of Ibis, where, like, ultimately they were motivated by knowledge, so they sought to explore, or whatever, right? But trying to predict the motivations of AI by ascribing human motivations to it, I think, is flawed from the beginning.

Speaker 2 (36:34):
Very interesting, Joe Gregory, did you get into any of that?
Do we know anything about what might motivate AI? Or
is it just things that humans are inputting that would
motivate AI?

Speaker 11 (36:45):
Now, this is such... he gets it. The caller gets it. It's such a deep philosophical question. And just to say it simply: there are many who say the AI doesn't have goals in the sense that we have goals to eat and procreate, but it is modeled after human-like agency, right? And you talk about, you know, you hear this thing
and you talk about you know, you hear this thing

(37:05):
about agentic systems. That's the new trend in AI: to have these systems that not only can do one thing, but can go off and do a week's worth of work for you and do a whole job, or book a plane ticket, or go beyond that. So once we're copying human goal-pursuing behavior, once we're copying that behavior, the theory is you are also copying human flaws,

(37:30):
which include, you know, selfishness and deception and power seeking, and now scaled up with superhuman competence. So that's the concern: because we've modeled it off of humans, that is the intelligence it is copying.

Speaker 3 (37:47):
Can I get a quick question in, Jeremy? I've been wondering this for a while. Is there any appetite for, like, consumer-grade something on your phone where you can detect AI? Because, you know, I'm scrolling, I'm looking at reels and listening to songs, and I personally would like to know. It would give me, like, a lot of peace to know that this is AI. Is that in development?

Speaker 12 (38:05):
Sam Altman talks about this pretty regularly. It's one of the regulations that he's called for: that if there's an image or a video that's posted online, to find a watermark, to find some way to signal it. And yet, not long into Sora, the newest app from OpenAI, they removed

Speaker 10 (38:27):
the watermark, because people didn't like it.

Speaker 12 (38:30):
And I do think that that's going to be something
that we're going to have to navigate in the short term.
There are people who are engaged in that kind of
discussion inside these labs and in.

Speaker 10 (38:43):
the US government. I do think, though, that the

Speaker 12 (38:47):
larger piece of it, though, is this blurry line between us and it. It is learning from our data; we are training it. We are the ones who are going to be using it, and already we're seeing signs that people are using it more than they were using the Internet when the Internet was this old. Right? We're integrating it into our society so much faster. And what

(39:08):
is the line like when I write an email and
I use spell check. I don't alert you to that, right,
What is the line when it comes to this dance
between us and it and what is authored by a
human and what is authored by this quote unquote artificial
intelligence trained off of us and what we know. It's

(39:30):
a deep philosophical question, and it's one of the reasons
that these AI labs employ philosophers and are recruiting philosophers
to help them think through all these you know, thorny issues.

Speaker 2 (39:43):
You know, it's interesting, because I feel like it wasn't all that long ago when I would ask people if they were using AI and only a handful of people were. And now I feel like everyone I talk to is using some sort of AI, and actually very few people are not using it. I mean, I'm sure there are plenty of people who aren't, but I'm just seeing anecdotally

(40:04):
that more people are using it. Let's get to a
couple more calls here. Jacob is calling from Tampa, Florida. Jacob,
are you afraid about AI or excited about it?

Speaker 21 (40:13):
No, I'm extremely excited. I was a nineteen year old
high school dropout felon in nineteen ninety eight when I
got into the advertising industry selling media, and it was
right at the cutting edge of the Internet becoming widely available.
And it gave me, the people that I've worked with
over the last almost thirty years, unprecedented access to information,

(40:37):
contact, interaction. We were able to learn new things, expand our businesses over and over. I've been part of six
very successful startups because of it. And my hope and
my expectation is, and you know, the experience I've had
with it, just briefly over the past year and a
half or so, is that the AI tools that I've
been using, my friends have been using, has done the

(40:58):
same thing. It's allowed us to activate ideas, to take action on things that we wouldn't necessarily have been able to do, because the skill set that is necessary for it would have been too expensive, too time consuming, and would have slowed us down. So I'm extraordinarily excited and hope that
my children are able to have these fantastic tools and
do things I can't even imagine right now.

Speaker 2 (41:19):
I appreciate the call, Jacob, I'm glad you're excited about it.
And I have to say, Gregory, I'm surprised by the
amount of people who have called in so far and
are excited, not scared.

Speaker 11 (41:30):
I mean, I don't think it's just the United States. I mean, a number of humanitarians... I was an international correspondent for many years, and a number of my friends in Kenya and other places, who do a lot of work on big global issues, are very excited about it, because they feel like they're fighting fires and nothing's changing.
I mean, look, there's a great thought experiment by the

(41:52):
philosopher Stuart Russell. I think we keep coming back to philosophers. He says, imagine we got a message from a superintelligent alien species, and they said, we are fifty years... I think it's fifty years, right? We are fifty years away. Get ready.

Speaker 10 (42:08):
Coming to your planet.

Speaker 11 (42:09):
We're coming to your planet. We're fifty years away. Get ready. What would we do if we knew we had fifty years to get ready for superintelligence to come? One thing I would hope we would not do is each make our own decision individually about what to do about our job. Maybe some of us would learn the alien language and become translators. You know, everybody would

(42:29):
have their own solution, as opposed to a collective, society-level conversation about, well, okay, what should we do? I'm not saying the answer is easy, but I think it would involve all of us.

Speaker 2 (42:42):
I want to get to one more call before we close out the hour. Addison is in Ypsilanti, Michigan. Addison, are you scared or excited?

Speaker 7 (42:50):
Thanks so much for taking my call and having me on the show. Frankly, given this conversation, I'm even more frightened than I was at the start. I've already seen real-world effects. I have friends who work in creative spaces who have lost opportunities and have seen their work used in ways that they didn't consent to.
And just recently, there's been two large data centers planned
in my community that are going to jack up electricity

(43:11):
rates and potentially affect the groundwater. I live in Michigan
and water is one of our most important natural resources.
And outside of what I see right now, I'm just
worried that we're sleepwalking into some kind of surveillance state.
We already have the tools, and I worry that AI
is just going to make them more effective.

Speaker 2 (43:27):
Well, Addison, I'm glad you brought up how much water
and electricity AI is using, because time and again we
hear from people on the show saying that they're really
worried about how many resources AI is using. Andy, what
do we know about that and how much is being used?
And is it going to require even more resources in
the future.

Speaker 12 (43:48):
I don't know if people even can wrap their minds
around the amount of resources that we are pumping into
the creation of this artificial intelligence.

Speaker 10 (43:59):
It is drinking up lakes.

Speaker 12 (44:03):
There's no comparison to the amount of energy that it is going to need. In fact, I recently was talking with an AI researcher at Google who believes that they're creating something far more like a god than a product, and I asked him, what are the limits? Like, what are the things that would stop you? And he said, we may need all of the fossil fuel.

Speaker 10 (44:26):
We may need it all. Like, we may need

Speaker 12 (44:28):
That much energy to create this thing, and they believe
that it could be good.

Speaker 10 (44:32):
Now, he might have been being

Speaker 12 (44:33):
hyperbolic, you know. This was just an off-the-cuff conversation, but I do think it is.

Speaker 10 (44:38):
It is.

Speaker 12 (44:40):
That's one of the reasons that Greg and I made The Last Invention, and one of the reasons that we're trying to get people to join the conversation and the debate: because this is affecting our world now. This is absolutely going to affect our world in the future. We don't know how big that effect will be, but it already is shaping up to be absolutely profound. And so the time to join the conversation is now.

Speaker 2 (45:02):
Well, and we thank all of our callers for joining the conversation this hour. I want to thank my guests, journalists Andy Mills and Gregory Warner. Their podcast is called The Last Invention, available wherever you get your podcasts. Guys, thank you so much for coming on the Middle.

Speaker 10 (45:14):
Thank you, thanks everybody for calling.

Speaker 11 (45:15):
Yeah, great to be here.

Speaker 2 (45:16):
And next week, we are going to be exploring the philosophical middle: what it means, what we can learn from it, and how it can improve our politics and our daily lives.

Speaker 3 (45:26):
Head on over to listen to the Middle dot com
to join the conversation, and subscribe to the Middle wherever
you get your podcasts so you don't miss a single episode.

Speaker 2 (45:34):
The Middle is brought to you by Long Nook Media, distributed by Illinois Public Media in Urbana, Illinois, and produced by Harrison Patino, Danny Alexander, Sam Burmis-Daws, John Barth, Anika Deshler, and Brandon Kondritz. Our technical director is Steve Mork.
Thanks to our satellite radio listeners, our podcast audience, and
the hundreds of public radio stations making it possible for
people across the country to listen to the Middle, I'm

(45:56):
Jeremy Hobson and I will talk to you next week.
