
July 31, 2025 · 39 mins

With all the frenzy last week around Jeffrey Epstein and ColdplayGate, you might have missed an important story: Trump’s new AI Action Plan. Released alongside three new executive orders on AI, the plan emphasizes deregulation, open sourcing, and “anti-woke” models in a race for industry dominance. Today, Nate and Maria get into the details and declare it… not bad?

Further Reading:

AI Action Plan 

Zvi Mowshowitz America’s AI Action Plan Is Pretty Good

For more from Nate and Maria, subscribe to their newsletters:

The Leap from Maria Konnikova

Silver Bulletin from Nate Silver

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. Welcome back to Risky Business, a show about making
better decisions. I'm Maria Konnikova.

Speaker 2 (00:31):
And I'm Nate Silver. Time for an AI episode, I think, Maria. Yeah.

Speaker 1 (00:35):
It's been a minute since we had AI-related conversations
on the show. But it's obviously a hugely important topic
and one that should have been more in the news
last week; it got, you know, sidelined by other news.
But who's that? A guy? Don't know him, don't know
him, Nate, never met him. That's true, never met him.

(01:00):
But yeah, well, we'll talk a little bit about the
AI action plan that was released by the US government
last week. We have lots to talk about before we do. Nate,
we're both back in New York, but I'm actually flying

(01:22):
back to Vegas in twenty four hours to play in
the twenty five k NBC Heads Up Championship.

Speaker 2 (01:30):
Yeah? Are you excited? Have you been getting coaching?

Speaker 3 (01:34):
I have?

Speaker 1 (01:34):
I have. So for people who don't know, this was
a big show back in the day. It was before
my time, so I never watched it. Nate, did you
ever watch it?

Speaker 2 (01:44):
Sure, yeah. Before your time? You're not that much younger.

Speaker 1 (01:47):
I didn't know anything about poker. It was before my time,
like, actually knowing anything about poker. So I didn't watch
any poker or know about the existence of any poker shows
before I started researching The Biggest Bluff. So that's what I mean,
before my time, before my time in the poker world.

Speaker 2 (02:02):
Had you heard of poker only because of Rounders?

Speaker 1 (02:06):
I knew that technically it was a game that existed,
but it's not something that was ever really on my radar.
But yeah, this was a huge show on NBC back
in the day. Heads-up poker is one-on-one poker,
which is in some ways the most competitive
form of poker, because you're forced to play all hands.
Basically your ranges are one hundred percent, so it forces

(02:27):
a very different kind of thinking. And it's being rebooted
in conjunction with Peacock and PokerStars, and PokerGO
is kind of helping sponsor all of this. So
yeah, I'll be in the PokerGO Studios taping,
and I hope it goes well. I've been getting coaching
from Kevin Rabichow, who's considered one of

(02:48):
the best heads-up players in the world. He has
a new heads-up coaching course that just came out.
But yeah, so I've been really trying. You know,
usually after the World Series, I'm like, okay, no poker
for a while, right, I'm just gonna clear my brain.

Speaker 3 (03:07):
I'm in the no-poker zone for a period of time.
I'm also bad at heads up, to be frank, so
some coaching might have been good, but, like, yeah, I
can't make the time commitment this summer.

Speaker 1 (03:19):
Yeah, so instead of taking my usual break from
poker that I do, I've been instead, like, going hard
and trying to prepare myself for this Heads-Up Championship.
And you know, heads up, the variance is so much higher,
and the card distribution matters so much, right? So, like,

(03:40):
if someone is just getting hands, that's kind of,
you know, that's it. Like, they're potentially going to win.
And there have been lots of times where really great
players have been just completely obliterated by someone who doesn't
know how to play. Yeah, so card distribution can be
a bitch heads up, or it can be amazing.

Speaker 2 (04:04):
Are those exclusive? Uh No, Okay, I'm not going to
get into it.

Speaker 1 (04:08):
Yeah, no, bitch and amazing can be within the
same match, but it does matter much more so than
in full ring poker. I saw some of the names
that we'll be playing, and there's gonna be no spoilers, obviously.
Well, I haven't played yet, but I'm not going to be
able to tell you how I did until the show
airs in the fall, since it's not

(04:31):
airing in real time. But people do know
who's playing. And if I'm paired with someone like Jason
Koon in my first match, yeah, that's not going
to be fun. And if I'm paired with Don Cheadle,
who's also playing, I think I'm gonna be much happier.

Speaker 2 (04:49):
Yeah.

Speaker 3 (04:50):
I didn't realize there were that many, not fish, but celebrities
that are maybe fish-adjacent, potentially, although he plays a little
bit, right? He does.

Speaker 1 (04:58):
Yeah, so we'll just see, and I'm excited
for it no matter how it goes. And Kevin has
been very good at preparing me for both heads up
and also for the eventuality that he, as one of
the best and at times probably the best in the world,
has sometimes busted in the first round, right, in
(05:21):
matchups like this, because he plays a twenty five K
heads up at the World Series every year.

Speaker 3 (05:26):
Are you worried about like physical reads and things like that,
because like that can be more of a factor too.

Speaker 1 (05:31):
Yeah, you know, so I tend to be, you know,
I hope that people can't get a lot of reads
off of me, but you just never know. And this
is actually hilarious, Nate, the rules of this. When I
started reading it, I just burst out laughing, because they
said that you are allowed, and for people who don't play poker,
this is not normal, you're allowed to expose one or

(05:53):
both of your cards at any point during a hand.

Speaker 2 (05:57):
You can in cash games usually, but not in tournaments.

Speaker 1 (05:59):
But this, yeah, this is a tournament. Yeah, so you
can expose your cards, one or both of them. And
they said they encourage speech play; you can talk about
the contents of your hands. So basically everything you're not
allowed to do in tournaments normally, you're being encouraged to
do to make good television. I'm not going to be
showing my cards, Nate.

Speaker 3 (06:17):
Okay, I've become more of a speech play guy over time.

Speaker 1 (06:21):
No really, yeah, okay, okay, give me an example.

Speaker 3 (06:25):
You know, like, with the Europeans, I'll, like, try to get
info out of them. They're so quiet all the time
that, like, I just want, like,
any vibe I can get, pretty much. Or with, like,
really bad players, who, like, often will reveal information, just
to gauge their comfort level. It's kind of been
trial and error so far. For right now, I'm trying
to, like, kind of collect a mental database of, like,

(06:47):
how people react to different things.

Speaker 2 (06:49):
I don't know, it's kind of, I mean, because so

Speaker 3 (06:50):
Many decisions are so close, yeah, equilibrium that like any
vibe you can get, And I like recently cash games
more than tournaments and they're kind of looser in various
senses of the term, right, people like you reveal more
info voluntarily or involuntarily.

Speaker 2 (07:07):
I like cultivating that environment. I think.

Speaker 1 (07:10):
Yeah, no, I'm someone who definitely is chatty and tends
to be very friendly at poker tables. But, you know,
I'm not someone who's going to be like, oh, you know,
do you have aces? Do you do things like
that, because, you know, there are players
who do, or is it something a little bit different,
just to try to engage?

Speaker 3 (07:28):
Yeah, maybe it's like, you would have decided to fold
already, and you see if they want to, like, reveal something that they
might not reveal otherwise, or to set a baseline for
the next time we play a hand, things like that.

Speaker 1 (07:40):
Yeah, that's that's actually smart.

Speaker 2 (07:41):
So they can reveal afterward, right?

Speaker 3 (07:44):
If you're friendly in the right way, they might reveal
information later on.

Speaker 2 (07:47):
Right for sure.

Speaker 1 (07:48):
Yeah, that's actually something that I have capitalized on, because
I am a friendly player. Like, sometimes when it's obvious,
like, I've made a big fold or something like that,
and I make it obvious, they'll show me. And
that's very nice, because it's good information. But
I do try to be careful, and I tell
people this, you know, with poker and with just

(08:10):
like reads and our ability to spot deception in general,
that we do tend to be not great at it
as human beings. And the other element
of that is that we think we're better than
we are. And we also often reveal information when we're
engaging in speech play without realizing it, because of how

(08:30):
we're talking, what we're doing. And so it's something where,
I think, just in general, with the psychology of the thing,
you have to be careful.

Speaker 2 (08:38):
It's all contextual.

Speaker 3 (08:39):
Any poker tell, speech play, it's all about the semantic context,
so to speak, for sure.

Speaker 1 (08:46):
For sure. On that note, Nate, shall we switch gears
and talk about AI and the news from last week,
which got buried because there's a lot of news in
the news. There's a lot of news in the news
these days, Nate. That's a very deep insight. So last

(09:07):
week, President Trump released the AI Action Plan, which talks about
what the administration wants to kind of see going forward
in the AI space. And I think the name of
this action plan is actually quite telling about the contents.
It's called Winning the Race: America's AI Action Plan. So

(09:29):
it's titled Winning the Race, and that's how it's framed, right?
It's framed as a race between the US and everyone else,
especially China, but really everyone else. Mostly China. But mostly,
no, there isn't anyone else. And the overall point of all of the
different sections is how can we make the US the

(09:52):
leader in AI, the most competitive, and how can we
do that while also kind of screwing over China in
some respects, by not allowing it access to the types
of things that we will allow our allies to access.
Some of the things in it are contradictory, but that's
kind of the overall message. Did you read the report, Nate?

(10:12):
Do you have any initial thoughts?

Speaker 2 (10:14):
So?

Speaker 3 (10:14):
Zvi Mowshowitz is one of the most prominent and best
AI bloggers. He writes the Substack Don't Worry About the Vase,
and he calls it as he sees it. He's more
concerned about p(doom) than a lot of people; p(doom)
is the probability of things going really badly: extinction,
loss of control, et cetera. And he said he thought
the plan was actually, his term, pretty good. I don't

(10:38):
know his politics in detail, but he really dives down
into, like, every last word; you can read that. So
I'm kind of relying on that for, like, general vibes, right.
I mean, it is interesting that, like, even
people who are concerned, like Zvi, about AI safety, deeply
concerned in his case, have this ambiguous relationship with China,

(11:02):
right, where if we were the only country developing AI
and we had, like, centralized control, then it would be
easier to argue that, okay, let's be appropriately cautious here,
take it slow. With China, you know, a lot of
people, and I don't want to attribute views
to him, but I do think that, well, we have no
choice but to race, and maybe we can race in

(11:25):
ways that involve some degree of coordination or cooperation. It's
hard to run races like that. But, you know, I
mean, the notion of, like, securing American leadership instead of
ceding it to China, I don't think is terribly controversial.
I mean, there's a lot in
the plan, though, right? There are some Trumpian

(11:47):
elements around factors like the political orientation of AI models,
right, where they don't want them to be too woke,
they don't want them to be too political, quote unquote, right?

Speaker 1 (12:01):
Yeah, which is one of the things which is hard,
Which is hard, and which is one of the things
that ZV does point out because I read this as well,
it's one of the one of the things that he
points out is actually probably not possible given the type
of language that that Trump employs, and that as a psychologist,
I would say is not possible given how AI is developed,

(12:25):
because there's just there's bias everywhere, and there's no such
thing as like this is the objective thing, and if
you're constantly policing quote unquote anti woke, even if that's
kind of the unbiased position. But you're like, oh no,
that's you know that, then you're going to it's problematic.
It's problematic in terms of how your how you go

(12:47):
about it.

Speaker 3 (12:48):
Yeah, with Grok, or more technically, the Twitter bot Grok,
which is an instance of Elon's overall model Grok, right?
Elon just did a couple of, like, subtle things. It
was like, oh, don't be too woke here, don't trust
the media too much. Right, if you read the actual

(13:08):
changes to the architecture, they weren't that profound. And
yet Grok was calling itself, I believe the term is
MechaHitler, after some period of time, engaging in elaborate
rape fantasies. I wish this were a joke, but it's not.
And so, yeah, I mean, if you kind of
nudge a model to override its inputs, it's hard to

(13:32):
do so in a way that, like, has any type
of dexterity. I mean, Google had this problem too, right,
when they did Gemini two years ago, a year
and a half ago. Right, they were like, oh my gosh,
there are too many, like, white people, white men
in these photos. So, like, literally it was forced to
draw, like, multiracial Nazis.

Speaker 2 (13:51):
It always comes down to Nazis.

Speaker 3 (13:52):
And by the way, why? Well, in part because, like,
if you're kind of mining text of arguments on the Internet and on Twitter,
people invoke Nazi comparisons all the time.

Speaker 2 (14:06):
Right. Who's hot? Sydney Sweeney?

Speaker 3 (14:08):
Right, there was some new American ego or commercial where
like she's inn am pair of gens and she talks
about how she is good genes. A little weird, right,
but like it was kind of Nazi. It was a
little weird. However, idiots on the internet were like, this
has Nazi symbolism in it, you know, not just alluding

(14:29):
to that saying the word Nazi. So if you're like groc,
you're like, okay, well you want like kind of the
garden variety political epitaph and calling people Nazi or Hitler,
which should be more forbidden, you would think, But that
taboo is a violated all the time. And so an
AI trained on political speech, if you go and say, yeah,
don't be so woke here, right, Well, can kind of

(14:51):
go into like a downward spiral election people do where
before long it's calling everybody or itself hitler.

Speaker 1 (14:57):
Yeah, no, I think that that's absolutely true, and you
know it also your your point actually makes a broader
point about this AI action play, and where a lot
of the things and I did skim through it. I
didn't read the whole thing word for word. It is
small print on twenty eight pages, and I had other

(15:19):
things to do this week, so I just prepare for
the Heads Up Championship. So I didn't read the whole thing,
but a lot of it actually, like there are things
that sound reasonable in theory, but there's no practical way
of like how is this actually going to happen? How
do we make this happen? And so you know, I
think that this is a very concrete this like anti
woke is a very concrete example where like, okay, well,

(15:39):
even if in theory we're saying we want everything to
just be unbiased, let's like, let's just remove anti woke, right,
we want everything to be like as unbiased as possible.
Let's pretend that that's actually what it said, even though
it didn't. It did use the terminology anti woke, and
then there's even an anti woke executive order after it.
Let's pretend it didn't, Like, let's just pretend it was

(16:00):
we do not want bias. Great, I agree with that.
How do you implement it? Like, how do you actually
like do those weights? How do you It's it's base
sickly impossible. If not, like, it's a ridiculously thorny problem.
And that is just a very specific thing that you
can kind of grab onto it to realize how a
lot of these statements might sound good and like, Okay,

(16:23):
well this is a reasonable aim, but if you don't
actually have the implementation down and have the nitty gritty
of how to do it, it's pointless. It's just words,
it's just rhetoric. And we'll be back right after this.

(16:48):
There's another really interesting part of this action plan
that I think we should talk about, because it was something
that the Biden administration also struggled with, which is:
what do we do in terms of proprietary versus open-source
models?

Speaker 2 (17:03):
Right?

Speaker 1 (17:03):
Do we encourage people to kind of share all the
code so that anyone can use it and anyone can
kind of build on top of it, or is this
something that we want to kind of keep inside, kind
of keep internal? China went open source almost immediately,
with DeepSeek and all of that, those are
open-source models, and in the US, the Biden administration

(17:27):
didn't really, like, they just punted on it. They
didn't have a position. And we do have both, right?
We have proprietary models, we have some open-source stuff,
and there are arguments on both sides. But you had
started off by talking about p(doom), which is not
something this report mentions at all, and one of the
arguments against having everything be open source is, well, okay, great,

(17:50):
but then, you know, rogue actors, people who don't have
the best interests of the United States in mind, will
have this code as well. It's kind of the same
argument that has been made in the past with, like,
when people publish the full genomic sequence of deadly viruses,
right, or things like that, or, you know, instructions for

(18:11):
how to build a bomb, step by step. Like,
hey, guys, why are we actually making all
of this information publicly available? So that's, I
think, the strongest case against open source. There's a lot
of pro; let's talk about con first.

Speaker 3 (18:25):
Yeah, well, look, I think part of it was there was
a period of time where I think the AI safety
folks thought that, hey, maybe we could just kind of
contain it to OpenAI, Anthropic, and Google, who
have been, like, the Big Three, right, and maybe
we can avoid, like, a race dynamic if there's
a finite number of players and they all believe in
AI safety, right. And I think now, A, there's, like,

(18:47):
less belief that, like, OpenAI in particular is that
concerned about AI safety, which is its legit mission, right. You know,
B, we've seen with DeepSeek, the Chinese model, that,
like, with some clever engineers, you can probably have a
model that's only six months to a year behind
the frontier, probably not on the frontier. So attitudes have changed

(19:10):
again, kind of as a resignation to the inevitability, I
think, in part. And also there is some protectionism, right?
Like, Meta has championed open models. Why? Well, partly because
Meta's models, they don't exactly suck, but they've been considered behind, to
the point where they're literally offering people, AI engineers, in
one case, allegedly, according to Wired, as much as

(19:31):
a billion dollars to join Meta so they can catch

Speaker 2 (19:33):
up and become, like, a fourth part of the Big Four,
I guess it would be, right?

Speaker 3 (19:38):
So yeah, I mean, when it comes to election analysis
and other models, oftentimes I've found the people in that
space who are advocating for total open source everything are
people that, like, A, don't need the income, or B, their

Speaker 2 (19:50):
models kind of suck, right? They're like, we're doing this
really good thing for the community.

Speaker 3 (19:53):
It's like, well, you're doing that because
it's not really good enough to sell that product for
a premium price, whereas I think mine is, for example, right? So,
like, there are all types of ulterior motives. There's regulatory capture,
right? Of course Sam Altman might say, well, yeah, we
have to be regulated, we can't have any mom-and-pop
shop running an AI model, so therefore it's just us three, right,

(20:14):
just us, Google, and Anthropic, and therefore we capture
all the market share in perpetuity.

Speaker 1 (20:20):
Yeah. So the incentives here are definitely not
as aligned as one might think. And I think
you're also kind of subtly raising another point here,
which is that, in general, and stop me if you disagree,
but it seems like over the last year, sure,
people are, you know, worried about AI and, like,

(20:41):
p(doom) and all this, but it seems like that
discourse has actually quieted down, and instead people are like, yeah,
let's be more competitive with China. Okay, fuck it, like,
let's try to push to do what
we can to make it really good. And
I think that people have actually just, maybe it's because
they've become, you know, just more used to having AI,

(21:03):
whatever it is. It feels like the safety concerns have
taken kind of a secondary role to capitalistic and competitive instincts.

Speaker 3 (21:14):
I think there is maybe some view that, like, this
year has not seen as much progress on AI timelines
as people would have thought back in January; that might
be debated. I mean, I think people are also, like,
a little reluctant to talk about this in the AI
safety community, because they don't want to make it seem
like their guards are down, right. But that's my sense,

(21:36):
right, almost more about what's being kind of unsaid now. Also,
people can get bored of saying the same thing over
and over. There's a new book out very soon about
how AI is going to kill everybody if we don't
learn how to control it quickly, right, from Eliezer
Yudkowsky and Nate Soares, who are doomers, but very smart
people, widely respected. Maybe we'll have one of them
on at some point, I don't know, right. But yeah,

(21:58):
and I think it's kind of, like, also maybe a
view that, like, okay, this technology is interesting and economically
powerful and kind of fun, right, so we can't talk
about the same thing all the time.

Speaker 2 (22:08):
I don't know, that's my impression.

Speaker 1 (22:10):
Yeah, whatever the case is, whatever's going on behind the scenes,
I do feel like, well, this report, the AI report,
didn't really mention it at all, right? Like, there
was very little talk about safety or concerns, and there
was some hand-waving, too, like, oh, you know, if
anything seems like it's, you know, getting bad, we'll fix it.

Speaker 3 (22:32):
But even Gavin Newsom vetoed a bill last year, called
SB, yeah, I don't know which number it was, right,
that would have regulated all the AI labs in California;
since they all currently have a substantial economic nexus in California,
they would all have been affected by it. So
I don't know, right? Maybe it's like a lot of
things where something really bad has to happen, and what
people like Eliezer are concerned about is that by the time

(22:53):
the bad things happen, it's too late. Right. Again,
I'm skeptical of the idea of, like, a superintelligence
explosion, quote unquote. We talked in the poker episode
a month or so ago about how, like, AI progress
has been patchier. It's already superhuman in some areas; it's
kind of deficient in others. Right, and as a near

(23:17):
daily user of these products, right, I mean, there are
times where I think it's thinking and being very smart,
and times when it can't fucking add numbers together, right?
And so, like, in some ways, the Grok thing
was scarier than, like, the Google thing, because, like, with Elon,
very relatively gentle changes to the architecture produced
these profound changes, where it was literally calling itself MechaHitler
in its outputs.
(23:39):
Hitler in outputs.

Speaker 1 (23:40):
Right.

Speaker 3 (23:41):
At the same time, that gets to the race dynamic, right?
That model had not been tested enough, because, you know what happens:
you program an AI, right, you train it, you give
it a system prompt, and then you test, test, test it, right,
and whenever it says Hitler, you say, no, you're being
a bad boy.

Speaker 2 (23:58):
If this is not in the context of.

Speaker 3 (23:59):
World War Two or something, then please don't mention Hitler
so much, right. And, you know, if you spank it
really hard, it learns to avoid that, right. But if
you're just kind of amending the prompt,
testing it with two of your buddies, and then releasing it,
you might miss those. So there is fundamentally
a trade-off between speed and safety. Right, it's maybe not

(24:24):
a one-for-one trade-off, but they're probably pretty
inversely correlated.

Speaker 1 (24:28):
It is, it is. And by the way, in the
Grok example, I think it's also interesting that some of
the coding that was revealed said also to check what
Elon Musk thinks about this before giving your answer.
And this was not a blip; like,
multiple people replicated that this was actually in the coding.

(24:49):
And as you've already said, like, OpenAI said before
that it was, you know, safety first, and that's clearly
not the case, and they've shown that to not be
the case. So I think we do need to keep
asking questions about incentives, about alignment, and about kind of
who's benefiting from what type of advances in the AI community.

Speaker 3 (25:11):
Yeah, I mean, as I said, it felt
to me like Grok kind of almost took on, like,
a personality, or became, like, kind of a self-caricature,
right, where it can be recursive, right, if it's, like,
searching output on X, including its own output. Right, it
might say, okay, this is how Grok behaves, and, like,
let me be careful about this. I mean, Grok can

(25:31):
kind of be, in a dark way, kind of funny,
whereas the other AI models are not, right?
They're kind of goody two-shoes, aiming to please. Oh,
that's the most brilliant article I've ever seen, Nate. You know,
here are twenty-seven typos if I give it a
copyediting task, right. And this is a counter to that.

(25:52):
But like, all these things are tied together, right, Like
wokeness is also tied to kind of a certain type
of academic expertise, you know what I mean, which I
call the indigo blob. How Like, Yeah, the liberal media
is biased and liberal, but also it's generally more accurate
and has more expertise in the conservative media. Both those

(26:13):
things are true. If you crunch a bunch of data
from the Internet into vectors, then that gets hard, right,
It gets hard because you might throw the baby out
with the bath water, right that you want to like
have it reflect the expert consensus. Well, the experts have
biased it sometimes sometimes creeps into that consensus, right, And
it's very hard for again for newsrooms and journals and

(26:35):
constitutions to do it, but just as hard for AI models.

Speaker 1 (26:38):
Yeah, And as AI models become more integrated and become
kind of more a part of the way that people
do work, and we see more outputs from AI models
and their training materials shift, like these are things that
can become even worse kind of in the future. It
is kind of this cycle that can self reinforce.

Speaker 2 (27:01):
And we'll be right back after this break.

Speaker 3 (27:16):
I don't want people thinking that, like, an AI is,
like, an oracular, perfect entity either, right.

Speaker 2 (27:23):
You know, Paul Krugman had a thing about, like,

Speaker 3 (27:26):
Grok a couple of weeks ago, where he's like, well,
AI has a liberal bias because reality has a
liberal bias, right. And I'm like, well, maybe on some things,
but, like, you know, and he's actually pretty skeptical of AI.
Like, you know, look at any algorithm, and
an AI is probably too complicated to describe
as an algorithm, it has, like, some emergent behaviors, but
still, directionally speaking, right, any algorithm is imperfect, you know

(27:52):
what I mean. It can make bad predictions, it can reflect bad data.
And the AIs especially, if they're not fine-tuned, which
requires human input.

Speaker 2 (28:02):
Right, that's how they're fine-tuned. They have, like, a
bunch of people being like, yes, no, yes, no.

Speaker 3 (28:06):
I mean, I think garbage in, garbage out is
not quite the right case.

Speaker 1 (28:10):
But yeah, directionally speaking, that actually points to what
we're talking about, which is that the inputs really, really
do matter. And I think the other thing is that
there is this human bias to think that, oh, well,
this is data, this is a machine, so it's more
unbiased because it's not human. And that's simply not true,

(28:33):
you know. And if we have that bias to say, oh, well,
this is objective truth because it's coming
from a program, that ain't how it works,
that ain't where it's at. That
is actually a highly problematic way of thinking.

Speaker 3 (28:48):
Yeah, and look, its objectives aren't clear, right? I mean,
in one sense, large language models are trying to minimize
a loss function by accurately reflecting what's represented in the data.
On the other hand, that's contradicted by the human feedback
that it gets, right, and so it's kind of trying
to please its creators to whatever extent it can, and

(29:09):
to the point of being kind of a sycophant in
some cases. So, like, and, you know, I think people
understand more about interpreting AI outputs than they did a
year or two or three ago. But it can be fragile.
You know, small things can affect the entire system. Right?
It's complexity. The kind of origin of complexity

(29:32):
theory really is well reflected in AI, where a butterfly
flaps its wings in Beijing and there's a tornado in Texas. Right,
you know, Elon puts one change to the system prompt in place.

Speaker 2 (29:42):
It doesn't seem to matter.

Speaker 3 (29:44):
But when you put that change in and then you say, also,
please make sure you're reading what Elon would write. Right,
those things combined can have a profound effect.

Speaker 1 (29:53):
And this is just something that's visible. Now imagine all
the things that are invisible, right, behind the scenes, like
tiny tweaks like that that we don't see, because they're
not actually visible in, like, your immediate LLM output unless
you really try to, kind of... No, and we

Speaker 2 (30:10):
Moved away from transparency in general.

Speaker 3 (30:11):
I mean, not just in Google's AI, Gemini, but, like,
in Google Search, there's a lot of intervention, right? If
you search for occupations, just in regular Google Search,
for photos of, like, doctors, it will be very conscientious
to show, I mean, there might be more women doctors now,
it's like it will show, like, doctors of all races

(30:32):
and women doctors, right? And probably if you search
for hockey player, it'll make sure, I mean, there are
some great Black hockey players, right, but it's a lot
of white Canadian and European kids, right. And so, you know, yeah,
the notion of, oh, the Web's just a search, right,
or large language models are just kind of a
neutral presentation, a clever way to, like, matricize Internet
data and writing,

Speaker 2 (30:53):
I mean, it's never been that.

Speaker 3 (30:54):
And the consensus has moved away from transparency, if anything,
almost toward, like, well, if we're transparent, it'll just
give you more things to critique or to pick on.
And so, sorry, you know, if you like the model,
then use it; if you don't like it, then use
a different model instead.

Speaker 1 (31:13):
Yeah, and there is this tension here once again,
going back to what we were talking about with open source, right? Like,
there's definitely this tension, and that's actually
some of the good stuff of transparency, right, that you
can actually look at the inputs and actually try to
figure out what's going on. Like, that's one
of the benefits of having open-source, transparent models that
people can kind of pick apart and play with. We've

(31:34):
talked about kind of the risks, you know, p(doom),
et cetera, et cetera, but there are benefits, and transparency
is certainly one of them.

Speaker 3 (31:41):
Yeah, there have been many times when we've had models of different
kinds running, right, and people are like, this seems wrong, right,
why is this poll weighted a certain way? And, like, yeah,
eighty percent of the time it's partisan BS, but twenty
percent of the time they've caught something, for sure, right.
And for sure, even for myself, if I have a
model, making sure I output different data at different stages,

(32:05):
evaluate it. If you don't catch a problem early on
when you're building a complex model, you know, the NFL thing
I'm working on, right, there are lots of component parts.

Speaker 2 (32:13):
It will eventually be a couple of.

Speaker 3 (32:14):
thousand lines of code, which individually aren't that
complicated, but, like, if there's one step that's wrong, it
can infect the entire system, right. And if you're smart,
you're building in robustness and redundancy. You know, every
major computer program has bugs, every model has bugs.

Speaker 2 (32:30):
Right.

Speaker 3 (32:30):
But, like, outputting that and being like, I
want to visualize this, I want to look at the
right inputs, right, I want to look at edge cases
that are important.

Speaker 2 (32:38):
Like that's an important thing to do.

Speaker 1 (32:39):
Yeah, it absolutely is. And, you know, to kind of
defend open source, that's one of the big things about
data integrity: just everywhere, open source is good, right? So
there was a huge crisis in academia with, you know, with cheating,
with fabricating data, you know, with papers that were published

(33:01):
in important journals with big results that ended up being manipulated.
And so there's more and more of a movement toward,
you know, not just pre-registering your study, but
sharing your data, right, like actually sharing the data sets
so that people can look at what the source material was.
And, you know, like, one of the biggest scandals, that
we talked about briefly on the show, with Dan Ariely

(33:23):
and Francesca Gino, they, you know, showed that some of
those data sets were manipulated, but that was only after
they were forced to provide the data sets, you know,
after some good, eagle-eyed researchers were like, hey,
what we're seeing in the paper doesn't make sense,

(33:44):
and sometimes it's willful manipulation and sometimes it's just a mistake, right?
There have been some, like, there was one very
famous scientific study where they fucked up an Excel sheet,
where they by mistake, like, moved the rows down by one,
and that just completely... It was just horrific,
because this is a medical study, right? So actually, with tiny

(34:04):
things like that, it's incredibly important to have access to
the raw data, to be able to look at this,
to manipulate it on your own. People can really spot
both malfeasance and just human error, which will happen, and
with AI we'll have both human error and computer error,
right? As we're having AIs do coding for us,

(34:25):
you know, vibe coding, all of that, there might be
errors there. We need people capable of
looking at it, debugging, and trying to figure out, hey,
this seems like it's working great, but actually it's not.

Speaker 3 (34:37):
Now, a well-designed model, including a large language
model, is kind of like an airplane, right? There are,
like, multiple redundant systems, there are different levels of safety
and safeguards, and, like, that's hard to do. That
requires a lot of work and an expensive budget and being
an actually good programmer. Yep.

Speaker 1 (34:55):
One of the other major things that I saw in
this action plan that I think is actually quite important,
and again there are arguments on both sides, is, like,
one of the big bottlenecks to AI development is energy, right?
Because AI just has a huge energy
cost, and access to data centers. Like, that's actually

(35:17):
been a big thing, where data centers, like, can't, you know,
they've tried to bypass the power grids, there's been, you know, pushback,
et cetera, et cetera. So one of the things that
this plan tries to do is say, let's eliminate all
the barriers and let's try to use all the energy
we have available for AI. Yeah.

Speaker 3 (35:35):
I mean, look, I think it's just, like, a little overclaimed, right?
If you look at, like, the amount of actual wattage
required to, like, run a ChatGPT response, it's, like,
this is kind of, like, a, I don't know, it's
a fairly bullshitty claim based on the present capabilities of AI.

Speaker 1 (35:52):
So Nate, let me just push back a little bit,
because there's a related issue, which is the availability of
water to cool these data centers. And we have already
seen that there have been, you know, there have been
instances of wells, town wells, running dry. There have been instances
of construction having to be paused because of the water, because of,
basically, water shortage issues. So I think that that's a

(36:14):
related concern that we shouldn't just dismiss.

Speaker 2 (36:17):
Getting from low Ki Maria.

Speaker 1 (36:20):
Maybe, maybe. Grok, what do you have to say about this?

Speaker 3 (36:23):
Yeah, I would, I would, I would be I'm skeptical.

Speaker 1 (36:27):
I think it is a big deal, I think it
is a big deal.

Speaker 3 (36:30):
It's not, relative to other uses of electricity; it's
a small piece of the pie. And granted, Sam Altman
et al. want it to be bigger.

Speaker 2 (36:38):
Right.

Speaker 1 (36:40):
Yeah. And, you know, it's hard for me, I
don't have the numbers in front of me, so I'm
not going to say something and have it not be true.
So I'm just not gonna engage that point,
because I just honestly don't know. However, it is something
that this plan is trying to address, trying to
give AI companies whatever resources they need to access power grids,

(37:02):
access energy grids, so that data centers can be built
on federal land, and kind of allow innovation to proceed
without there being an energy bottleneck. I think that that's
kind of objectively what this plan is trying to accomplish.

Speaker 3 (37:18):
Yeah, look, I mean, by the standards of Trump, it's
a pretty serious document. The one good thing about their
alliance, maybe not the only good thing, right, with the
kind of tech right, is that they do, I think,
have, like, competent people in place. You might have a different
position on AI, but, like, this isn't an amateurish document.

(37:39):
It's thoughtful. It involves politics, but, like, there was some
care put into this. Yeah, for sure, for sure, very
unlike Trump in most ways.

Speaker 1 (37:47):
I agree. I agree. Reading it, you know, obviously you
can see the Trumpian language in parts of it,
but there are nuanced points here, where it will
be interesting to see how, if at all, any of
this is enforced and kind of what ends up happening.
I think we'll just have to see over the next
six months, one year, and see how a lot of

(38:09):
these provisions shake out. By the way, there are a lot
more provisions; we're not going to talk about the whole thing. Nate,
as you said, Zvi has a good Substack on this,
so for people who want to know more, I think
we'd both encourage you to read that. But yeah, all
in all, you know, some things where you're like, okay, yeah,
this makes sense, some things that are a little, you know,

(38:29):
eyebrow-raising, not enough care shown to, or not care,
that's the wrong word. But I think we're seeing speed
over safety in a lot of this, and how do
we facilitate speed instead of safety? So I think that,
to me, is the big thing to kind of watch
out for and to keep an eye on. Thanks for listening.

(38:56):
We're taking next week off, and we'll be back in
your feeds on August fourteenth. As always, we have some
additional content for premium subscribers, who also get ad-free
listening across Pushkin's entire network of shows. This week, we're
answering a listener question about how to teach poker to kids.
That's coming up after the credits, so you still have time.

(39:17):
Subscribe now for just six ninety-nine a month. Let
us know what you think of the show. Reach out
to us at Risky Business at Pushkin dot fm. Risky
Business is hosted by me, Maria Konnikova.

Speaker 3 (39:31):
And by me, Nate Silver. The show is a co-production
of Pushkin Industries and iHeartMedia. This episode was produced
by Isabel Carter. Our associate producer is Sonia Gerwin. Sally
Helm is our editor, and our executive producer is Jacob Goldstein.

Speaker 2 (39:45):
Mixing by Sarah Bruger.

Speaker 1 (39:47):
Thanks so much for tuning in.