All Episodes

April 23, 2025 47 mins

This week, Nate and Maria discuss AI 2027, a new report from the AI Futures Project that lays out some pretty doom-y scenarios for our near-term AI future. They talk about how likely humans are to be misled by rogue AI, and whether current conflicts between the US and China will affect the way this all unfolds. Plus, Nate talks about the feedback he gave the AI 2027 writers after reading an early draft of their forecast, and reveals what he sees as the report’s central flaw.

Enjoy this episode from Risky Business, another Pushkin podcast.

The AI Futures Project’s AI 2027 scenario: https://ai-2027.com/


Get early, ad-free access to episodes of What's Your Problem? by subscribing to Pushkin+ on Apple Podcasts or Pushkin.fm. Pushkin+ subscribers can access ad-free episodes, full audiobooks, exclusive binges, and bonus content for all Pushkin shows.

Subscribe on Apple: apple.co/pushkin
Subscribe on Pushkin: pushkin.com/plus

See omnystudio.com/listener for privacy information.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin.

Speaker 2 (00:20):
Hey, it's Jacob.

Speaker 1 (00:21):
There's another Pushkin show called Risky Business. It's a show
about decision making and strategic thinking. It's hosted by Nate
Silver and Riya Kanakova, and they just had a really
interesting episode about the future of AI. So we're going
to play that episode for you right now. We'll be
back later this week with the regular episode of What's

(00:42):
Your Problem.

Speaker 3 (00:48):
Welcome back to Risky Business, a show about making better decisions.
I'm Maria Kanakova.

Speaker 2 (00:53):
And I'm Nate Silver.

Speaker 3 (00:54):
Hey. Today on the show is going to be a
little bit doomtastic.

Speaker 2 (00:58):
Yeah, I mean, I don't know if it's like worse
and thinking about like the global economy going into a
recession because of dumb fuck tariff policies about how we're
all going to die in seven years and staid no,
I'm just kidding. This is a very very intelligent and
well written and thoughtful report called AI twenty twenty seven
that we're to spend the whole show on I because

(01:20):
think it's such an interesting to talk about, but that
you know, includes some dystopian possibilities. I would say it does.

Speaker 3 (01:28):
Indeed, So let's get into it and hope that you
guys are all still here to listen to us in
seven years.

Speaker 2 (01:38):
The contrast is interesting between like all the chaos we're
seeing with tariff policy in terms of starting a trade
war with China and then other types of chaos. It's
interesting to kind of look at this. I mean, I
wouldn't call it a more optimistic future exactly, like but
like on a different trajectory of like a future that's

(02:00):
going to change very fast, according to these authors, with
profound implications for you know, everything, the humans species. These
researchers and authors are saying that everything is going to
change profoundly.

Speaker 3 (02:15):
And.

Speaker 2 (02:18):
Even though there is some hedging hair, this is kind
of their base case scenario. And like you know, base
case number one and base case number two differ. There's
like kind of a choose your own adventure at some
point in this report, but they're both very different than
the status quo, right, and the notionately hear from that,
like everything becomes different If ais become substantially more intelligent

(02:42):
than human beings. People can debate and we will debate
on this program what that means. But yeah, do you
want to do you want to contextualize this for me?
Do you want to tell people too. Yeah, there's are
of this report.

Speaker 3 (02:50):
Absolutely absolutely. So the report is authored officially by five people,
but I think unofficially there's also a sixth. We've got
Eli Leiflund, who's a super forecaster and he was ranked
first on Rand's forecasting initiative, so he is someone who
is very good at kind of looking at the future

(03:13):
trying to predict what's going to happen. You have Jonas Walmer,
who's a VC at Macroscopic Ventures. Thomas Larson was the
former executive director of the Center for AI Policy, so
that is a center that advises both sides of the
aisle on how AI is going to go. And Romeo Dean,

(03:34):
who is part of Harvard's AI Safety Student Team, so
someone who is still a student, still learning, but kind
of the next generation of people looking at AI. And
finally we have Daniel Cochtaylo, who basically had written a
report back in twenty twenty one. He was a AI

(03:57):
researcher and he looked at predictions for AI for twenty
twenty six and it turns out that his predictions were
pretty spot on, and so Open AI actually hired Daniel
as a result of this report.

Speaker 2 (04:10):
Obviously he left, yes, and.

Speaker 3 (04:12):
Then and then he left exactly.

Speaker 2 (04:14):
And importantly, So there's also Scott Alexander exactly.

Speaker 3 (04:17):
So I was saying, he's the he's the person who
kind of is in the background, and he's you guys
might know him as the author behind astral Codex.

Speaker 2 (04:27):
And I know Scott. He's one of the kind of
fathers of what you might call rationalism. I think Scott,
when I interviewed him from a book, was happy enough
with that term and accused me or co opted me
into also being a rationalist. These people are somewhat adjacent
to the effective altruist, but not quite right. They're just

(04:48):
trying to apply a sort of thoughtful, rigorous, quantitative lens
to big picture problems, including existential risk, of which most
people in this community believe that AI is both an
existential risk and also kind of an existential opportunity, right,
that it could transform things. You talk to Sam Alp
and he'll say, we're going to cure cancer and elimina

(05:09):
poverty and whatever else. Right. And Scott's also an excellent writer.
And so let me disclose something which is slightly important here.
So I actually was approached by some of the authors
of this report a couple of months ago, I guess
it was in February ish, just to give feedback and
chat with them. So I'm working off the draft version right,

(05:29):
which I do not leave. They changed very much, so
my notes pertained to an earlier draft. I did not
have time this morning to go back and re.

Speaker 3 (05:35):
Read it, so I was not on the inside loops.
So I did not get an earlier draft. And I've
read this draft and basically just to kind of big
picture sum it up, it outlines two scenarios, right, two
major scenarios for how AI might change the world as

(05:56):
soon as twenty thirty. Now important note like that that
date is could have hedged. It might be sooner, it
might be later. There's kind of a there's a confidence
interval there. But the two different scenarios. One basically we
do twenty thirty, humanity disappears and is taken over by AI.

(06:18):
The positive report is in twenty thirty basically we get
AIS that are aligned to our interests, and we get
kind of this AI utopia where AIS actually help make
life much better for everyone and make the standard of
living much higher. But the crucial turning point is before
twenty thirty. And the crucial kind of question at the

(06:39):
center of this is will we be able to design
AIS that are truly aligned to human interests rather than
just appear to be aligned and kind of lying to
us while actually following their own agenda. And how we
handle that is kind of the lynchpin. And it's actually interesting,

(07:01):
Nate that you started out with China, because a lot
of the policy choices and a lot of what they
see as kind of the decision points will affect the
future of humanity actually hinge on the US China dynamic,
how they compete with each other, and how that sometimes
might basically clash against safety concerns because no one wants

(07:24):
to be left behind. Can we manage that effectively and
can kind of that transition work in our favor as
opposed to against us. I think that this is kind
of one of the big questions here, And so it's
funny that we're seeing all of this trade war right
now as the sideport is coming out.

Speaker 2 (07:41):
Yeah. Look, I think this exercise is partly just a
forecasting exercise, right, I mean, obviously it's just kind of
like fork at the bottom where we learn to have
an AI slow down or we kind of are pressing
fully on the accelerator, right, Like, in some ways, scenarios
are like not that different, right, either one assumes remarkable

(08:05):
rates of technological growth that I think even AI I'm
never quite sure who to call an optimist or a pessimist, right,
even AI believers you know, might think is a little
bit aggressive. Right, But what they want to do is
they want to have like a specific, fleshed out scenario
for how the world would look like. It's kind of
like a modal scenario. And like, I think they'd say that, like,

(08:29):
we're not totally sure about either of these necessarily, right,
And I don't think they'd be as like pedantic as
to say, if you do X, Y and Z, then
we'll save the world and have utopia, and if you don't,
then we'll all die, Right. I think they'd probably say
it's unclear and there's kind of like risk either way.
We wanted to go through the scenario of like fleshing
out like what the world might look like, right. I

(08:51):
do think one thing that's important is that whatever decisions
are made now could get locked in, right that you
pass certain points in overturn and it becomes very hard
to decelerate like an arms race. This is you know
what we found during the Cold War for example. I
mean one of the big things I look at is like,

(09:11):
do we force the AI to be transparent in its
thinking with humans? Right? Like, now there's been a movement
toward the ai'll actually explicate it's thinking more. I'll ask
it a query open AI. The Chinese models to this too, right,
and I'll say, I am thinking about X, Y and Z,
and I'm looking up PD and Q and now I'm
reconsidering this. It's actually has this chain of thought process, right,

(09:33):
which is explicated in English. You know. One concern is
that what if the AI just kind of communicates to
another in these implicit vectors that's inferring from all the
texts it has. It's kind of unintelligible to human beings, right,
and maybe kind of quote unquote thinking in that way
in the first place, and then does us the favor
of like translating back so it it goes from English

(09:55):
to kind of this big bag of numbers is one
a researcher called it, right, and then it translates it
back into English or whatever land which you want. Really,
in the end, what if it just skip cuts out
that last step, right, then we can't kind of like
check what AI is doing. Then it can behave deceptively
more easily. So you know, so that part seems to

(10:17):
be important. I want to hear your your first impressions
before I kind of poison the well too much.

Speaker 3 (10:24):
Well, my first impressions is that the alignment problem is
a very real one and an incredibly important one to solve.
And what I got from this is that actually, the
problem that I've had with like these initial AI lms
is the kernel of what they're seeing there. Right, So
you and I have talked about this on the show
in the past, and I've said, well, my problem is

(10:47):
that when I'm a domain expert, right, I start seeing
some inaccuracies, and I start seeing like places where like
it either just didn't do well or made shit up
or whatever it is. Now, I think it's very clear
that those problems aren't going to go away, right, that
that is going to get much much better. However, the
kernel of it's showing me some thing, but that might

(11:10):
just be you know, I have no way of verifying
if that's what's going on, what it's reading, like, how
it's I don't want to say thinking about it, even
though in the report they do use thinking, but it's normal.

Speaker 2 (11:22):
Vizing too much, I think is correct. I think it's.

Speaker 3 (11:24):
Okay, we'll stick to that language. Yeah, okay, So, so
how it's thinking about it that those little problems and
like the little the glitches and the things that it
might be doing where it starts actually glitching on purpose
are not going to be visible to the human eye.
And so one of the main things that they say

(11:45):
here is that as AI internal R and D gets
rapidly faster, so that means basically AI's researching AIS, right,
and so internally they start developing new models, and as
they kind of surpass human ability to monitor it, it
becomes progressively more difficult to figure out, Okay, is the

(12:07):
AI actually doing what I want to do? Is the
output that it's giving me its actual thought process, and
is it accurate or is it like trying to deceive me?
But it's actually kind of inserting certain things on purpose
because it has different goals, right, because it is actually
secretly misaligned, but it's very good at persuading me that

(12:27):
it's aligned. Because one of the things that actually came
out of this report and I was like, huh, you
know this is interesting is if we get this remarkable
improvement in AI, it will also remarkably improve at persuading us, right,
so as part of making yeah, so this is this
is but I've never even thought about that. I was like, okay, fine,

(12:50):
But one of the things that I do buy is
that it's going to be very difficult for us to
monitor it and to figure out like is it truly
aligned with human wants, with human desires, with human goals,
And the experts who are capable of doing that, I
think are actually going to dwindle right as AI starts
proliferating in society. And so to me, that is something

(13:11):
that is actually quite worrisome, and that is something that
we really need to be paying attention to now. Just
to fast forward a little bit, in their doomsday scenario
in twenty thirty one, AI takes over it basically like
suddenly releases some chemical agents, right, and humanity dies and
the rest of the stragglers are taken care of by drones,
et cetera. I don't even like it's it doesn't even.

Speaker 2 (13:34):
Quick and painless death.

Speaker 3 (13:35):
I will say, let's hope we don't know what the
chemical agents are might not be quick and painless. Some
chemical agents are actually a very painful death thing. So
let's hope let's hope it's quick and painless. Quick, quick, yes,
stuff quick? Okay, hopefully some chemical agents are not quick.
Let's hope it's it's quick and painless. But if they're

(13:58):
actually capable of deception at that high level, then you
technically don't even need them to do it. If we're
trusting medicine and all sorts of things to the AIS,
it's pretty easy for it to actually manipulate something and
actually insert something into codes, et cetera that will fuck
up humanity in a way that we can't that we

(14:19):
can't actually figure out at the moment, right, Like the
way I think of it, And this is not from
the this is not from the paper, but this is
just the way that like my mind processed. It is
like think about DNA, right, Like you have these remarkably complex,
huge strands of data, and as we've found out, but
it's taken forever, one tiny mutation can actually be fatal, right,

(14:43):
but you can't spot that mutation. Sometimes that mutation isn't
fatal immediately but will only manifest at a certain point
in time. That's the way that my mind tried to
kind of try to conceptualize what this actually means. And
so I think that, you know, that would be easy
for a deceptive AI to do and to me like that,
that's kind of the big takeaway from this report is

(15:06):
that we need to make sure that we are building
AIS that will not deceive right that they're capabilities, they
explain them in an honest way, and that honesty and
trust is actually prioritized over other things, even though it
might slow down research, it might slow down other things,
but that that kind of alignment step is absolutely crucial

(15:29):
at the beginning, because otherwise humans are human, right, they're
easily manipulated, And we often trust that computers are quote
unquote rational because they're computers, but they're not. They have
their own inputs, they have their own weights, they have
their own values, and that could just lead us down
a dark path.

Speaker 2 (15:48):
Yeah, so let me follow up with this pushback. I
guess right, Like, first of all, I don't know that
humans are so easily persuaded. This is my big critique
with like all the misinformation people who say, well, misinformation
is the biggest problem that society basis. It's like people
are actually pretty stubborn and they're kind of sound pretentious.

(16:10):
They're kind of basian and how they formulate their beliefs, right,
they have some notion of reality. They're looking at the
credibility of the person who is telling them these remarks.
If it's an unpersuasive source, it might make them less
likely to believe they're balancing what other information with their
lived experience so called Right. You know, part of the
reason that like I am skeptical of AIS being super

(16:33):
persuasive is like you know that it's an AI. You
know it's trying to persuade you. You know what I mean.
Like if you go and play poker against like a
really chatty player, if Phil Hemmi with or Scott Sieber
or someone like that, Right, you know, on some level
of the best play is just to totally ignore it. Right,
You know that they are trying to sweet talk you
into doing exactly what they want you to do, and

(16:55):
so the best play is to disengage or literally you
can randomize your movis videos no notion of what the
game theoretical optimal play might be, right, or salesmen or
politicians have reputations for being Oh he's a little too smooth.
Gavin newsome a little too fucking smooth. Right, I don't
find Gavin news some persuasive at all. Right, use a
little too from the hair gel to the constantly shifting vibes.

(17:19):
I mean, I don't really find Gavin news some persuasive
at all, even though like an AI, I'd say, boy,
Gavin new some good looking guys look gravelly throated. But
you know whatever, I mean. Look, the big critique I
have of this project, and by the way, I think
this is an amazing project in addition to like wonderful
writing if you viewed on the web not your phone.
So these very cool, like little infographic that updates everything

(17:41):
from like the market value. If they don't call it
open aye, they call it open brain. I guess is
what they settle on for a substitute.

Speaker 3 (17:47):
Yeah, they call everything something else, just to make sure
that they're not stepping on any toes. So they have
open Brain, and then they have Deep Scent from China.

Speaker 2 (17:57):
I wonder which one that could be. I wonder. But
it's beautifully presented and written, and like I appreciate they're
going out on a limb here, you know, I mean,
I think they have. It's been fairly well received. They've
gotten some pushback both from inside and I think outside
the AI safety community. Right, but they're putting their necks

(18:18):
in the line, hear. They will look if things look
pretty normal, if looks pretty normal in twenty thirty two
or whatever, right, then they will look dumb for having
published this.

Speaker 3 (18:29):
Well, and they actually have that right as a scenario
that you end up looking stupid if everything goes well.
But that's okay. Now, can I push back on the
persuasion thing a little bit, just on two things. So
first of all, the poker example is not actually a
particularly applicable one here, because you know that you're playing

(18:50):
poker and you know that someone is trying to get
information and deceive you the tricky thing. So this is
kind of when I spend time with con artists. The
best con artists aren't Gavin Newsome, Like, they're not car salesmen.
You have no idea they're trying to persuade you to
do something. They are just like nice, affable people who
are incredibly charismatic, and even in the poker community, by

(19:10):
the way, like some of the biggest grifters who like
it comes out later on, we're just like stealing money
and doing all of these things. Are charming, right, They're
not sleazy looking like they have no signs of, oh,
I'm a salesman. I'm trying to sell you something. The
people who are actually good at persuasion you do not
realize you are being persuaded. And I think people are

(19:31):
incredibly easy to kind of to subtly lead in a
certain direction if you know how to do it, and
I think ais could do that, and they might persuade
you when you don't even think they're trying to persuade you.
You might just ask like, can you please summarize this
research report and the way that it frames it right,
the way that it summarizes it just subtly changes the

(19:53):
way that you think of this issue. We see that
in psych studies all the time, by the way, where
you have different articles presented in slightly different orders, slightly
different ways, and people from the same political beliefs, you know,
same starting point, come away with different impressions of what
kind of the right course of action is or what
this is actually trying to tell you. Because the way

(20:13):
the information is presented actually influences how you think about it.
It's very very easy to do subtle manipulations like that.
And if we're relying on AI on a large scale
for a lot of our lives, I think that if
it has like a quote unquote master plan, you know,
the way that they present in this report, then persuasion
in that sense is actually going to be.

Speaker 2 (20:35):
You'll know you're being manipulated, right, that's the issue.

Speaker 3 (20:37):
No, you don't know. That's the thing.

Speaker 2 (20:39):
People will be manipulated because it's AI and that I
don't but I don't know.

Speaker 3 (20:43):
I honestly, like Nate, I applaud your belief in human's
ability to adjust to this, but I don't know that
they will because I've just seen enough people who are
incredibly intelligent fall for cons and then be very unpersuadable
that they have been conned, right, instead doubling down and
saying no, I have not. So humans are stubborn, but

(21:06):
they're also stubborn and saying I have not been deceived,
I have not been manipulated, when in fact, they have
to protect their ego and to protect their view of
themselves as people who are not capable of being manipulated
or deceived, and I think that that is incredibly powerful,
and I think that that's going to push against your optimism.
I hope you're right, but from what I know, I
don't think you are.

Speaker 2 (21:27):
I'm not quite sure you call it optimism some I
guess maybe we do like different views of human nature.
But like there's not yet a substantial market for like
AI driven art we're writing, and I'm sure there will
be one eventually, right, But like people understand that context matters, right,
that you could heavy I create a rip off the
Mona Lisa, but you can also buy rip off the
Mona Lisa and Canal Street for five bucks, right, and

(21:49):
like it. You know, So it's it's the intentionality of
the act and the context of the speaker. Now, sound
like super woke, I guess right where you're coming from,
A I think that actually is how humans communicate. Like
art that might be pointless drivel coming from somebody can
be something to coming from a Jackson Ballock or whatever.

Speaker 1 (22:10):
You know.

Speaker 3 (22:11):
Absolutely, I think that that's a really important point. By
the way, I think it's a different point, but I
think that that is a very important point. I think
context does matter.

Speaker 2 (22:22):
We'll be right back after this message. I was buttering
up this report before. My big critique of it is like,
where are the human beings in this? Or put another way,

(22:44):
kind of like where is the politics? Right? They're trying
not to use any remotely controversial real names, right, so
you have open brain for example, So where's where is
President Trump? Let me steal a quick search to make
sure their named Trump does not appear to know the name.

Speaker 3 (23:03):
So they do. Actually, I don't know if this existed.
Maybe they took your criticism in this, but they do
have like the vice president and the president like they
do put politicians in this version of the report. They
don't have names, but they say the vice president in
one of these scenarios, you know, handily wins the election
of twenty twenty eight. We have one vice president.

Speaker 2 (23:23):
They have general secretary. I think I'm not sure if
they I mean general secretary. General secretary resembles she yep,
and the vice president kind of resembles jd Vance, Right,
I don't think the president resembles Trump at all.

Speaker 3 (23:40):
Right, it's kind of the no, they didn't character like, yeah,
they tried to side stuff that if.

Speaker 2 (23:45):
We think it's all happening in the next four years then,
you know, presidential politics matter quite a bit. I mean,
I know, I h this is such a fucking you know,
I was jogging earlier on the East Side and I
was listening to the Sraclient interview with Thomas Friedman. It's
such a fucking yuppie fucking thing, right, It's.

Speaker 3 (24:06):
Okay, It's okay, You're allowed to be a.

Speaker 2 (24:08):
YOUNGESTI even not a huge fan of necessarily, but like,
you know, as well versed some like geopolitics and like
China issues, is like, yeah, China, You've just been back
from Chinese. Like, yeah, China's kind of winning, you know
what I mean. And like, I'm not sure how how
Trump's hawkishness on China, but like kind of imbecilically executed

(24:30):
hawkishness on China, Like I'm not sure how that figure's
in to this. Right, If we're reducing US China trade,
that probably does produce an AI slow down, maybe more
for US if we're like if they're not exporting their
like raw earth materials and and so forth, but we're
making it hard for them to get in video chips,
so they probably have like lots of workarounds and things

(24:52):
like that. Maybe maybe Trump tariffs are good. I would
like to ask the authors this report, because it means
that we're gonna like have slower AI progress. And I'm
not joking, right, that increases the hostility between the US
and China in the long run, right, I mean that's
if we were send it all the terrafs. I think
we still permanently or let's not say permanent, let's say
at least for a decade or so, have injured us

(25:16):
standing in the world. And so I don't know how
that figures in. And I'm like, I'm also not sure
like kind of quote what the rational response might be.
But one thing they tried to let me make sure
that they kept us into their report, right, So they
actually have their implied approval rating for how people feel

(25:40):
about open brain, which is there not very self open AI.
I think this actually is some feedback who that they
took into account. Right. They originally had it slightly less negative,
but they have this being persistently negative and then getting
more negative over time. It was a little softer in
the in the previous version that that I saw, so

(26:00):
they did change that one thing on at some stage.
But like the fact that like AI scares people, it
scares people for both good and bad reasons, but I
think mostly for valid reasons, right, that the fear is
fairly bipartisan. That the biggest AI accelerators are now these
kind of Republican techno optimists who are not looking particularly

(26:24):
wise given how it's going with the first ninety days
whatever we are, the Trump administration, and the likelihood of
like a substantial political backlash, right, which could lead to
dumb types of regulations. But like, you know, part of
it too is like okay, AI they're saying can do
not just computer disk jobs, but like all types of things, right,

(26:47):
And like humans kind of play this role initially as supervisors,
and then literally within a couple of years people start
to say, you know what, am I really adding much
value here? Right? You kind of have like these legacy jobs,
and there is a lot of money. I think most
these scenarios imagine very fast economic grow although maybe very lumpy,

(27:11):
right for some parts of the world and not others.
We're kind of just sitting around with a lot of
idle time. It might be good for live poker, Maria. Right,
all of a sudden, all these smart people, their open earth,
their open brain, excuse me, stock is now worth billions
of dollars, right, and like nothing to do because the
AI is doing all their work, right, they have a

(27:31):
lot of fucking time to play some fucking texas hold
them right.

Speaker 3 (27:36):
That is, that is the one way of thinking about it.
Let's go back to your earlier point, which I actually
think is an important one, because obviously they were trying
to do as all you know, super forecasting tries to
do is you try to create a report that will
work in multiple scenarios.

Speaker 2 (27:57):
Right.

Speaker 3 (27:57):
You can't tie it too much to like the present moment,
otherwise your forecasts are going to be quite biased. However,
I do think that what you raise kind of our
current situation with et cetera, has very real implications. Given
that this is kind of the central dynamic of this
report that their predictions are based on, I think that
it's incredibly valid to actually speculate, you know, how will

(28:22):
if at all, this effect the timeline of the predictions,
the possible the likelihood of the two scenarios. And I
will also say that one of the things in the
report is that all of these negotiations on like will
we slow down, will we not? How aligned is it?
This all takes place in secret, right, Like we don't
know the humans don't know that it's going on. We
don't know what's happening behind the scenes, and we don't

(28:45):
know what the decision makers are kind of thinking. And
so for all we know, you know, President Trump is
meeting with Sam Altman and trying to trying to kind
of do some of these things. And it's funny because
we were kind of pushing for transparency in one way,
but there's a lot of things here that are very
much not transparent.

Speaker 2 (29:04):
Yeah, it's kind of the deep state, right, but also
also a lot than negotiations are now AI versus AI, right,
And look, I'm not sure that AIS will have that
trust both with the external actor and internally. I'm skeptical
of that. Right. If that does happen, they kind of
think this might be good because the AIS will probably

(29:25):
behave in like it literally game theory optimal way, right
and understand these things and make I guess, like fewer
mistakes than humans might say, if they're properly aligned.

Speaker 3 (29:38):
Like that's a crucial thing because in the doomsday scenario,
AI negotiates with AI, but they conspire to destroy humanity. Right,
So there are two scenarios. One, it's actually properly aligned,
so AI negotiates with AI. Game theory works out, and
we end up with you know, democracy and wonderful things.
But in the other one, where they're misaligned, AI negotiates

(30:01):
with AI to create a new AI basically and destroy humanity.
So it can go one way or the other depending
on that alignment step.

Speaker 2 (30:09):
First of all, I mean, the utopian didn't seem that
utopian to me, right, I'm not sure that no, it
did it.

Speaker 3 (30:15):
It actually seemed quite distolling to me, Like it seems
incredibly disturbed. It's kind of like, you know, I look,
but at least we're still alive, we'll have chures.

Speaker 2 (30:23):
Will probably live longer. And again, lots of lots of
lots of poker the ail you're writing Silver Bulletin and
and hosting our podcast. Right, let me let me back
up a little bit, because I think we maybe take
for granted that some of these premises are are kind
of controversial, right, so they have a breakpoint. I think
in twenty twenty six, well says Art, why aren't certainly

(30:47):
increased potentially be on twenty twenty six, right, So that's
kind of the breakpoint. Is like twenty seven is this
inflection point? I think I'm using that term correctly in
this context. You know. So I'm reading this report and
up to twenty twenty six and like, thumbs up, yia,
this seems like very smart and detailed about, like, you know,
how the economy is reacting and how politics are reacting

(31:07):
in the in the race dynamic with China. Maybe there
needs to be a little bit more Trump in there.
I understand why politically they didn't want to get into
that mess, right, But like, so there's kind of three
different things here, right. One is a notion of what's
sometimes called AGI, or artificial general intelligence. And if you
ask a hundred different researcher you get a hundred different

(31:28):
definitions of what AGI is. But you know, I think
it is based like being able to do a large
majority of things that a human being could do competently,
something we're limiting it to kind of like desk job
type tasks. Right, anything that can be done remotely is
sometimes definition that is used or through remote work, right,

(31:48):
because clearly AIS are inferior to humans and like sorting
and folding laundryth or things like that, right that require
certain type of intelligence. Right, if you use the kind
of desk job definition, then like AI is already pretty
close to AGI. Right. I use large language models all
the freaking time, and they're not perfect for everything. I

(32:08):
felt like, you know, in terms of like being able
to do the large majority of desk work at levels
ranging from competent intern to super genius, Like on average,
it's probably pretty close to being generally intelligent by that definition, right,
if you're the.

Speaker 3 (32:25):
One using it. I just want to like once again
point that out because one of the things that they
say in the report is that as it gets more
and more involved what we're asking AI to do, it's
like the human process to evaluate whether it's accurate and
whether it's making mistakes will get longer and longer. And
I think they say, like for every like basically one
day of work, it'll take several it's like a two

(32:48):
to one ratio at the beginning for how long it
will take humans to verify the output. Right, So you think, like,
you think you save time by having AI do this,
but if you want it to actually develop correctly, then
you need a team and it takes them twice as
long to verify that. What the AI did is actually
true and actually valid and actually aligned, et cetera, et cetera.
Now you're not asking it to do things that require

(33:09):
that amount of time, but there do need to be
little caveats to how we how we think about their
usefulness and how you are able to evaluate the output
versus Southern scenarios.

Speaker 2 (33:20):
When I use AI the things it's best with our
things that like save any time. Right where I feeded
a bunch of different names for different college basketball teams
were working in our NCAA model, I'm like, take these
seven different naming conventions that are all different and create
a cross reference table of these, which is kind of
like a hard task. You need to have a little

(33:41):
context about basketball. And it did that very well. Right,
That's something I could have done. It might have taken
an hour or two, but you know, instead, instead it
could do it. It's in a few minutes, and it gets faster.
It's like, oh, I've learned from this from you before, Nate,
so now I can be faster doing this type of
task in the future. I was at the poker tournament
down in Florida last week and like, uh, you know,

(34:04):
I ask open research. Excuse me, God, is that like.

Speaker 3 (34:09):
See exactly right. It all because deep research, Deep research,
Deep research, I asked, deep reach, eating too many of
these twenty twenty seven reports, brand gets.

Speaker 2 (34:21):
To pull a bunch of stock market data for me.
And then I'm playing at poker hand and like and
make a really thin sexy value bet with like fourth pair.
No one knows that means right, I bet a very weekend.
I thought the other guy would call it with an
even weaker hand, and I was right. And I feel
like I'm such a fucking stud here valuating fourth pair,
And well AI does the work for me, and then
of course I like bust out of the tournament an

(34:43):
hour later, and meanwhile, you know, deep research bungles this
particular task. But in general AI has been very reliable.
But the point is that like there's like a a
inflection point where like I'm asking you to do things
that like are just a faster version of what I
could do myself. I would at the moment I ask, aid, be, like,
I want you design a new NCA model for me

(35:03):
with these parameters, because like I wouldn't know how to
test it. But anyway, I mean long wanted to hear,
so AGI, we're going to get AGI, or at least
we're going to get someone calling something AGI soon. Right,
artificial super intelligence where it's doing things much better than
human beings. I think this report takes for granted or

(35:25):
not takes for granted. It has lots of documentation about
its assumptions, but it's saying, Okay, this trajectory has been
very robust so far, and people make all types of
bullshit predictions. Of the fact that these guys in particular
have made accurate predictions in the past is certainly worth something,
I think, Right, But they're like, Okay, you kind of
follow the scaling law, and before too much longer, you know,

(35:48):
AI starts to be more intelligent than human beings. You
can debate what intelligent means if you want, but do
superhuman types of things or and or do them very fast,
you thing might be different, right, And and I can
do things very fast. Is maybe it's certainly a component
of intelligence, right, But like, but I don't take for
granted that like quote unquote AI I can reliably extrapolate

(36:12):
beyond the data set. I just think that, like, it's
not an absurdly pavologic It may even be like the
base case are close to the base case, but like
that's not assumable from first principles. I don't think we've
all seen lots of trend charts of the you know,
if you look at a chart of Japan's GDP in

(36:33):
the nineteen eighties, you might have said, Okay, well, Japan's
gonna take over the world, and being people bought this
and now it's kind of hasn't grown for like forty
years basically, right, And so like we've all seen lots
of curves that go up and then it's actually an
estra where the fuck you call it? Where like it
begins to bend the other way at some point and
and we can't tell until later. Right. The other thing
is like the abillity of AI to plan and manipulate

(36:58):
the physical world. I mean some of these things where
they're talking about, like you know, brain uploading and dice
and swarms and nanobots like you know there, I would
literally wager money against this happening on the time scales
that they're talking about. Right, they double the timescale, Okay,
then I might start to give some more probability. And look,

(37:20):
I'm willing to be wrong about that. I guess we'll
I'll be dead anyway, fifty percent likely in this scenario,
but like.

Speaker 3 (37:25):
This scenario fifty percent to do like AA.

Speaker 2 (37:28):
You know, the physical world requires sensory input and lots
of parts of our brain that AI is not as
effective at manipulating. It also requires like being given or
commandeering resources somehow. By the way, this is like a
little bit of a problem for the United States. I mean,
we are behind China by like quite a bit in

(37:52):
robotics and related things, right, So, like, I don't know
what happens if like we have the brainier, smarter AIS,
but like they're very good at manufacturing machinery. So like,
what if we have the brains and they have the
brawn so to speak, right, and they have maybe a
more authoritarian but functional infrastructure. So I don't know what

(38:14):
happens then, right, Like, but the ability of AIS to
comment to your resources to control the physical world, to
me seems far fetched on these timelines, in part because
of politics, right, I mean, the fact that it takes
so long to build a new building in the US
or a new subway station or a new highway, and
the fact that our politics is kind of sclerotic. Right,

(38:38):
And look, I mean I don't want to sound like
too pessimistic, but if you read the book I recommended
last week, fight, I mean, we basically did not have
a fully competent president for the last two years, and
I would argue that we don't have one for the
next four years. Right, So like all these kind of
things that we have to plan for, like who's doing
that fucking planning? Our government's kind of dysfunctional, and you know,

(39:00):
maybe that means we just lose to China, right, Maybe
that means we lose to China, at least we'll have
like nice cars. I guess.

Speaker 3 (39:10):
We'll be back right after this. I think that your
point about the growth trajectories not necessarily being reliable is

(39:30):
a very valid one. The Japanic example is great. You know,
malfusion population growth is another is another big one. Right,
we thought that population would explode and instead we're actually
seeing population decline, So you know, the world does change.
The thing that I think they rely on is that
the AIS are capable of designing just this incredible technology

(39:55):
much more quickly, so that our building process and all
of that gets sped up one hundredfold from what it
is right now. But it's still at least at this
point needs humans to implement it right, and needs all
of these different workers. And so yeah, I think there's
some assumptions built into here that I hope, like I
hope that that timeline isn't feasible, and I do think

(40:16):
that there are things that are holding us back all
the same. I think it's really I think it's interesting.
One of the reasons I like this report is that
it forces you to think about these things right and
try to game out some of these worst case scenarios
to try to prevent them, which I think is always
an important thought exercise. I do want to go back
to kind of their good scenario, which just so bad

(40:38):
scenario is, you know, we're all wiped out by a
chemical warfare that the AI is release on us. Good
scenario is that you know, everyone gets a universal basic
income and AI does everything and no one has to
do anything and we can just live, you know, happily
with Maria at play poker. Yeah, and that just as

(41:00):
you suggest it, that seems actually like a very dystopian
scenario where people can become much more easy to brainwash, control,
et cetera, et cetera. It's like a dumbing town, right
where where we're not challenged to produce good art to
advance in any sort of way. Just to me, it
does not seem like a very meaningful The.

Speaker 2 (41:22):
Question is there's not way another if you read AI
twenty twenty seven, which I highly recommend that you read it.
There's also another post by a pseudonymous poster called L.
Rudolph L who wrote something called A History of the
Future twenty twenty five to twenty forty, which is very
detailed but goes through kind of like what this looks

(41:43):
like at like more of a human level, how society evolves,
how the economy evolves, how work evolves, right, and like
very detailed, just like AI twenty twenty seven. Is that
kind of focused on the parts of A twenty seven,
then I think it kind of deliberately ignores maybe can
call them mild blind spots or whatever, right, But like,

(42:04):
but that's interesting because that kind of thinks about, like
what types of jobs are there in the future. There
a probably lots of lawyers actually, right, because you know,
the law is very sluggish to change, especially in a
constitutional system where there are lots of vitail points. Right,
probably high end service sector. You know, you go to
a restaurant because everyone's for a lot of people are
rich now, right, and you're flattered by the attractive young

(42:25):
server and things like that is kind of highly kind
of like catered and curated experiences. I guess I have
some faith in humanities abilieved to fight back quote unquote
against like two scenarios that it might not really like
either one, you know what I mean. And like the
scenario where like AI is producing like ten percent GDP

(42:48):
growth or or whatever. Right, man, it's great a few
own stocks that are exposed to AI and tech companies probably, right,
but it's also making that money on the backs of
mass job displacement, and like, you know, it kind of
a cert of confident in the long run that human
beings fine productive things to do. And you know, massive
employment has been predicted many times and never really occurred, right,

(43:09):
But like, but it's not occurring this fast where they
think the world ends in six years or whether they're
predicting or we have utopia in six years, and like
just the ability of like human society to like deal
with that change at these times scales leads to like
more chaos than I think. They're more predicting. But I
think also and I told them this too, right, I
think also leads to more constraints. Right that you hit bottlenecks.

(43:33):
If you have five things you have to do, right
and you have the world's fastest computer, et cetera, et cetera.
But there's like a power outage in your neighborhood, right,
and that's a that's a bottle deck. Right, Maybe there
are ways around it. Youre like go to home depot
and buy a generator or you know what I mean.
But like, but the point is that like you're often

(43:54):
defined by like the slowest link, and politics are sometimes
the slowest link, but also by like Also, you know,
I think the report maybe under states, and I think
kind of in general the a saify community like maybe
understates the ability of like human beings to cause harm
to other human beings with AI, Right, that concern kind

(44:14):
of gets brushed off, is like to pedestrian or like
say to pedestri And I was saying, yeah.

Speaker 3 (44:20):
The exact word I was thinking of. I think that's
a good I mean, I think that's a great place
to end it, because yes, we do need to be
concerned about all of these things about AI. But like
that phrase I think is very crucial, like do not
underestimate the ability of humans to cause harm to other humans.
And I think that that's you know, it's not a

(44:41):
very optim it's not a very pleasant place to end,
but I think it's a really important place to end.
And I think that that's a very valid kind of
way of reflecting on this name.

Speaker 2 (44:51):
Or to trust AIS too much, right, I generally think
that concern is like somewhat misplays but like if we're
handing over critical systems to AI, right, it can cause
problems if it's very smart and deceives us and doesn't
like us very much. It also cause problems if it
has hallucinations, bugs in critical areas where it isn't as

(45:15):
robust and hasn't really been tested yet that are outside
of its domain yep, or there could be espionage. Anyway,
we will have plenty of time, although maybe only seven
more years actually to like explore expore these scenarios.

Speaker 3 (45:35):
Yes, and in seven years we'll be like, welcome back
to the final episode of Risky Business, because the prediction
is we're all going to be done tomorrow. But yeah,
this was an interesting exercise, and I think my my
p doom has slightly gone up as a result of
reading this, but I also remain optimistic that humans can

(45:55):
can do good as well as harm.

Speaker 2 (45:58):
Yeah, my interest in learning Chinese has increased as a
result of recent developments I know about my pe doome.

Speaker 3 (46:05):
All right, Yeah, let's let's do some language emmergis and
I'm with you. That's it for today. If you're a
premium subscriber, we will be answering a question about whether
mface can ever be plus ev right after the credits.
And if you're not a subscriber, it's not too late.

(46:27):
For six ninety nine a month the price of a midber,
you get access to all these conversations and all premium
content across the Pushkin Network. Risky Business is hosted by
me Maria Kannakova.

Speaker 2 (46:40):
And by me Nate Silver. The show is a co
production of Pushkin Industries and iHeartMedia. This episode was produced
by Isabel Carter. Our associate producer is Sonia Gerwit. Sally
helm is our editor, and our executive producer is Jacob Goldstein.
Mixing by Sarah Bruger. If you like the show, please
rate and review us. You know we like we take
a four or five. We take the five. Rate and

(47:01):
review us. To other people, Thank you for listening.
Advertise With Us

Popular Podcasts

Stuff You Should Know
Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

On Purpose with Jay Shetty

On Purpose with Jay Shetty

I’m Jay Shetty host of On Purpose the worlds #1 Mental Health podcast and I’m so grateful you found us. I started this podcast 5 years ago to invite you into conversations and workshops that are designed to help make you happier, healthier and more healed. I believe that when you (yes you) feel seen, heard and understood you’re able to deal with relationship struggles, work challenges and life’s ups and downs with more ease and grace. I interview experts, celebrities, thought leaders and athletes so that we can grow our mindset, build better habits and uncover a side of them we’ve never seen before. New episodes every Monday and Friday. Your support means the world to me and I don’t take it for granted — click the follow button and leave a review to help us spread the love with On Purpose. I can’t wait for you to listen to your first or 500th episode!

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.