August 7, 2025 · 10 mins
The provided source article, "AI: Will It Outsmart Us? Experts Divided on the Future of Intelligence," discusses the evolving debate surrounding Artificial Intelligence (AI), specifically its potential to surpass human intelligence. It highlights the varied predictions among experts on when this "Singularity" might occur, ranging from optimistic views by AI company leaders to more cautious outlooks from superforecasters. The article also examines the "scaling hypothesis" – the belief that increased computational power will lead to advanced AI – while acknowledging the limitations of current AI in areas like emotional intelligence and true creativity. Finally, it outlines significant concerns associated with AI's rapid progress, such as job displacement, misinformation, and existential risks, emphasizing the urgent need for regulation and safety research. Read the article here.
Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to the deep dive. Today we're tackling, well, probably one of the biggest questions out there right now: is artificial intelligence actually going to, you know, surpass human intelligence? It's moved way beyond just sci-fi, hasn't it? It's really shaping things now.

Speaker 2 (00:16):
Oh, absolutely. And this idea, the singularity, AI getting smarter than us, it brings up, well, a lot of excitement for some, but frankly quite a bit of unease for others.

Speaker 1 (00:27):
Yeah, you can feel that tension.

Speaker 2 (00:28):
But what we can't deny is what AI can already do. Look at its learning capabilities, how it's improving accessibility, the breakthroughs in medicine. It's already pretty remarkable.

Speaker 1 (00:37):
Definitely sets the stage, it does.

Speaker 2 (00:39):
A very complex stage for what's coming next, which is what...

Speaker 1 (00:41):
We want to unpack, exactly. So our mission here is to wade through a whole stack of expert views on this. We want to understand the different takes, from the super optimistic to the, well, much more cautious. The goal isn't just to list predictions, but to get why they differ so much, so you can get a clear picture of where things might be heading. Makes sense, right? So let's

(01:04):
jump into the big one: when, when, or maybe if, AI hits human-level intelligence. The predictions are kind of all over the map. They really are.

Speaker 2 (01:15):
It's fascinating. If you look at, say, the leaders of major AI companies, figures like Shane Legg at DeepMind or Sam Altman at OpenAI, quite a few are pretty bullish. Bullish meaning? Meaning they see it happening soon. Some suggest a fifty-fifty chance of AGI, artificial general intelligence, within maybe the next four or five years. Like twenty twenty-eight isn't out of the question for them.

Speaker 1 (01:36):
Wow, okay, that's incredibly soon.

Speaker 2 (01:38):
It is. But then you contrast that with a big survey from late twenty twenty-three. They asked nearly twenty-eight hundred AI experts, a broader group, exactly, and their view was more conservative. They put the fifty percent chance for high-level machine intelligence, or HLMI, at twenty...

Speaker 1 (01:54):
Forty-seven. So quite a bit further out, still within many of our lifetimes, though, right?

Speaker 2 (01:58):
And interestingly, that twenty forty-seven date is actually sooner than what a similar survey found back in twenty twenty-two. They predicted twenty sixty then, so even the more cautious group sees the pace picking up.

Speaker 1 (02:06):
That acceleration itself is significant.

Speaker 2 (02:09):
It is. And even that twenty twenty three survey gave
a ten percent chance of HLMI by twenty twenty seven,
so a small chance of it happening very soon is
still kind of on the table for them too.

Speaker 1 (02:19):
Okay. So company leaders thinking maybe twenty twenty-eight, broader experts leaning towards twenty forty-seven. What about other groups?

Speaker 2 (02:26):
Well, then you have the superforecasters. These are people known for making really accurate predictions on, well, all sorts of things, right?

Speaker 1 (02:33):
They have a track record.

Speaker 2 (02:34):
They do, and they are the most cautious. A tournament in twenty twenty-two showed they thought there was only, get this, a one percent chance of AGI by twenty thirty. One percent? Okay. Yeah, and only twenty-one percent by twenty fifty. They see a seventy-five percent chance of it happening by twenty-one hundred, so a much, much longer timescale.

Speaker 1 (02:52):
So we've got timelines from like five years to maybe eighty years or even longer. What does this huge range tell us? I mean, how are we, listening to this, supposed to make sense of it?

Speaker 2 (03:02):
That's the million-dollar question, isn't it? It highlights massive uncertainty, but it also hints that maybe people aren't even talking about the same thing when they say human-level intelligence.

Speaker 1 (03:10):
Ah, the definition problem.

Speaker 2 (03:11):
Exactly. What does that actually mean for a machine? Is it passing a certain test? Is it creativity? Is it consciousness? The definition itself is really slippery.

Speaker 1 (03:21):
So these different timelines might reflect different finish lines, in a...

Speaker 2 (03:25):
Way precisely, and different beliefs about how we get there,
which brings us to the whole scaling idea.

Speaker 1 (03:30):
Right, the scaling hypothesis. Explain...

Speaker 2 (03:32):
That a bit? Sure. So this is a core belief for many people actually building these large AI models. The basic idea is that if we just keep making the models bigger, more data, more computing power, we'll eventually get to AGI.

Speaker 1 (03:47):
Just scale it up, more is better.

Speaker 2 (03:49):
Essentially, yes. They see it almost like an engineering inevitability.
Keep adding compute, keep adding data, and intelligence will sort
of emerge.

Speaker 1 (03:57):
And what's the evidence they point to? Why do they
believe scaling alone is the path?

Speaker 2 (04:01):
Well, they point to the progress we've already seen. Think about large language models, LLMs. As they've gotten bigger, fed more data, trained with more compute, they've developed surprising abilities.

Speaker 1 (04:13):
Abilities they weren't explicitly programmed for?

Speaker 2 (04:16):
Exactly. These emergent capabilities, things like translation, coding, even some forms of reasoning, just appeared as the models got bigger. It looks like a predictable relationship: more scale equals better performance, maybe even new kinds of performance.
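
To make that "predictable relationship" concrete, here is a minimal sketch, not from the episode or the article, of the kind of power-law scaling curve the scaling hypothesis leans on. Every number in it is invented for illustration; the point is only that a power law looks like a straight line on a log-log plot, which is what makes extrapolating the trend so tempting.

```python
# Toy illustration of the "more scale = better performance" claim: a power-law
# loss curve, loss ~ a * compute^(-alpha). All numbers are invented.
import numpy as np

compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])   # hypothetical compute budgets
rng = np.random.default_rng(0)
loss = 5.0 * compute ** -0.05 * (1 + 0.01 * rng.standard_normal(compute.size))

# A power law is a straight line in log-log space, so a linear fit recovers
# the exponent (the slope) and lets you extend the trend line.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), deg=1)
print(f"fitted exponent: {slope:.3f}")

# Extrapolate one order of magnitude further -- the step the critics question.
predicted_loss = 10 ** (intercept + slope * np.log10(1e23))
print(f"predicted loss at 1e23 compute: {predicted_loss:.3f}")
```

The critics' point, in this framing, is that a lower loss on a curve like this doesn't automatically mean understanding, and nothing guarantees the straight line keeps going.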

Speaker 1 (04:31):
So the trend line looks promising if you just keep extending...

Speaker 2 (04:34):
It. That's the argument. But, and this is a big but, critics raise a really important counterpoint.

Speaker 1 (04:39):
Okay, what's the pushback?

Speaker 2 (04:40):
The pushback is, just because an AI can generate text
or images that look incredibly human like, does that mean
it actually understands or reasons like a human?

Speaker 1 (04:51):
The difference between mimicry and actual thought.

Speaker 2 (04:54):
You got it. Critics argue these models are still fundamentally
sophisticated pattern matching machines. They might be predicting the next
word or pixel brilliantly based on vast amounts of data,
but do they have common sense? Do they understand causality
or consciousness?

Speaker 1 (05:10):
Things that seem core to human intelligence?

Speaker 2 (05:12):
Right. The critics suggest that maybe just scaling up the current approaches won't be enough. Maybe we need fundamentally new ideas, new architectures, not just bigger versions of what we have now.

Speaker 1 (05:21):
That makes sense. It's not just about doing calculations faster
or having access to more facts. Human intelligence feels different.

Speaker 2 (05:29):
It does, and that brings us to what AI can't do well, even the best systems we have today. It reminds us intelligence isn't just chess or Go, where AI is superhuman.

Speaker 1 (05:40):
So what are those key areas where humans still clearly have the edge?

Speaker 2 (05:45):
Well, one huge one is what researchers call one-shot learning. Think about a child. You show them one picture of, say, a zebra. They often get it immediately and can recognize other zebras.

Speaker 1 (05:55):
Right, They don't need thousands of photos.

Speaker 2 (05:57):
Exactly, but current AI models typically need massive data sets, thousands, sometimes millions of examples, to learn a new concept reliably.
That ability to generalize from very little data is profoundly human.
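
As a concrete picture of what "one-shot" asks of a system, here is a toy sketch, not from the article, in which a new input is classified after seeing exactly one labelled example per class. The two-dimensional "feature vectors" and the class names are made up purely for the example.

```python
# Toy one-shot classification: one labelled example per class, then assign a
# new input to whichever example it is closest to. Vectors are hand-made stand-ins.
import numpy as np

support = {                       # exactly one "training" example per class
    "zebra": np.array([0.9, 0.1]),
    "horse": np.array([0.2, 0.8]),
}

def classify_one_shot(query):
    """Label the query by its nearest single support example (Euclidean distance)."""
    return min(support, key=lambda label: np.linalg.norm(query - support[label]))

print(classify_one_shot(np.array([0.85, 0.15])))  # -> zebra
```

The hard part the speakers are pointing at isn't this nearest-neighbour step; it's arriving at a representation good enough that a single example suffices, which is exactly where today's models still tend to need thousands of examples.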

Speaker 1 (06:09):
Okay, one-shot learning. What else?

Speaker 2 (06:11):
Emotional intelligence? This is a big one. AI just doesn't
have emotions. It can maybe recognize or simulate them based
on text patterns, but there's no genuine empathy, no understanding
of complex social cues, no sportsmanship, no real feelings.

Speaker 1 (06:23):
It's processing, not feeling.

Speaker 2 (06:25):
Precisely. And then there's creativity, real, spontaneous creativity. AI can generate art or music in the style of human artists, sure, and sometimes it's impressive.

Speaker 1 (06:36):
We've seen those images and songs.

Speaker 2 (06:38):
Yeah. But can it conceive of and write, say, a truly original, groundbreaking play that wins awards? Can it experience genuine, unprompted joy or inspiration that leads to a novel idea? That seems uniquely human...

Speaker 1 (06:53):
Still. So it's learning, emotion, creativity, these deeper...

Speaker 2 (06:57):
Aspects, right. And exploring these gaps actually forces us to think harder about what we mean by intelligence in the first place. It's not as simple as we...

Speaker 1 (07:04):
Might think, which, given the speed of progress, leads us
to the concerns. Okay, let's unpack this. What are the
real worries keeping experts up at night? Because you hear
a lot about the potential downsides.

Speaker 2 (07:13):
Oh, absolutely, the concerns are serious and widespread. Job displacement
is a major one, obviously, the fear that AI could
automate not just manual labor, but cognitive tasks across many industries.

Speaker 1 (07:24):
And not just replace, but maybe eliminate huge categories of work.

Speaker 2 (07:27):
That's the worry. Then there's the whole issue of misinformation and deepfakes. That survey we mentioned, an incredible eighty-nine percent of those AI experts were substantially concerned about AI-generated...

Speaker 1 (07:38):
Deepfakes. Eighty-nine percent? That's almost everyone.

Speaker 2 (07:42):
It's a huge consensus. Think about the potential impact on trust, on democracy, if we can't easily tell what's...

Speaker 1 (07:48):
Real online. Scary stuff. What else?

Speaker 2 (07:51):
Well, another significant worry, cited by seventy-three percent of the experts, is AI empowering dangerous groups or individuals. How so? For example, using AI to help design new weapons, engineer biological threats like viruses more easily, or launch devastating cyberattacks, giving powerful tools to those with bad intentions.

Speaker 1 (08:09):
Okay, that's deeply concerning.

Speaker 2 (08:11):
And then there's the existential risk question, the idea that
a super intelligent AGI, if its goals aren't perfectly aligned
with ours, could pose a fundamental threat.

Speaker 1 (08:20):
The Skynet scenario, basically.

Speaker 2 (08:22):
Well, it's often framed more subtly than that, but yes, the potential for extremely bad outcomes. In one survey, the median expert estimate gave a five percent chance that AGI could lead to outcomes as bad as human extinction.

Speaker 1 (08:33):
Five percent. Even if it's low, it's not zero, and the stakes are everything.

Speaker 2 (08:40):
Exactly. It's a low-probability, perhaps, but incredibly high-impact scenario, and that possibility, however remote, is taken seriously by many sober experts.

Speaker 1 (08:48):
Which explains the urgent calls for, you know, guardrails, for regulation and safety measures.

Speaker 2 (08:54):
Absolutely. These aren't just abstract worries. They're driving concrete proposals, things like significantly boosting investment in AI safety research, trying to figure out how to build AI that's verifiably safe and aligned with human values.

Speaker 1 (09:07):
Trying to solve the alignment problem before we get super intelligence.

Speaker 2 (09:10):
That's the idea. Also calls for mandatory safety testing and audits for powerful AI systems before they're deployed widely, and, a really crucial piece, international coordination, getting companies and countries to work together on safety...

Speaker 1 (09:23):
Standards, because the tech itself is global.

Speaker 2 (09:26):
It is, and the competition is fierce. Geoffrey Hinton, one of the godfathers of deep learning, made waves when he left Google. He basically warned that the race between tech giants to build and deploy ever more powerful AI might be, in his words, impossible to stop.

Speaker 1 (09:40):
A race without brakes.

Speaker 2 (09:41):
That's the fear. It puts immense pressure on policy makers
and the companies themselves to proactively manage the risks because
the competitive drive is just so strong.

Speaker 1 (09:51):
Wow. Okay. So winding this up, we've covered a lot: the incredible potential, these really profound risks, and this huge disagreement among experts about how fast it's all happening.

Speaker 2 (10:02):
Yeah, the spectrum is vast.

Speaker 1 (10:03):
And understanding that spectrum, the why behind the different views, feels really crucial for anyone trying to navigate what's coming. It's not about picking a date, maybe, but understanding the forces at play.

Speaker 2 (10:14):
I think that's right. Yeah. And maybe it leaves you, the listener, with a final thought to chew on. Given
everything we've discussed, the different timelines, the things AI can't
do yet, the very definition of intelligence, what does it
truly mean for a machine to outsmart us? Is it
just speed? Is it knowledge? Or is it those uniquely
human things like wisdom, empathy, creativity? And how might our

(10:36):
own ideas about intelligence have to change as AI keeps evolving?

Speaker 1 (10:40):
And crucially, what part do we all play? What's our responsibility as individuals and as a society in shaping how this incredibly powerful technology...

Speaker 2 (10:49):
Unfolds. Big questions, ones we'll likely be grappling with for a long time to come.