
August 14, 2025 44 mins

This week Nate and Maria discuss the release of GPT-5, the latest model from OpenAI. This model promises to be faster, smarter, and more useful while also reducing hallucinations and sycophancy. It also lets users choose among different AI “personalities.” What do Nate and Maria think so far?

Then, they turn to the newly inked Nvidia trade deal, which notably includes a 15% cut of sales to China for the US government.

Further Reading:

Ethical Issues In Advanced Artificial Intelligence by Nick Bostrom, 2003

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, 2014

For more from Nate and Maria, subscribe to their newsletters:

The Leap from Maria Konnikova

Silver Bulletin from Nate Silver

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. Welcome back to Risky Business, a show about making better decisions. I'm Maria Konnikova, and I'm Nate Silver. So today on the show, Nate, we've got a pretty interesting Risky Business news week. We've got GPT-

(00:39):
5 being released, which is maybe less of a big deal than the launch hype announcements suggested, but still a big deal. And then we have some interesting trade stuff going on with AI chips, right, with Nvidia.

Speaker 2 (00:52):
"Interesting" is an interesting word for it.

Speaker 1 (00:55):
Yes, well, well, we've got GPT...

Speaker 2 (00:57):
Now there have been five GPTs since the last GTA, Grand Theft Donald. Sorry to throw that in there.

Speaker 1 (01:03):
Fascinating. Well, before we get into it, Nate, I just wanted to say, you know, we're taping this on Tuesday, the twelfth of August, so a few days

(01:24):
before listeners will hear it. But congratulations. Today is the
launch of the paperback of On the Edge.

Speaker 2 (01:32):
On the Edge: The Art of Risking Everything, the bestseller by groundbreaking author Nate Silver. Yeah, so the paperback is out. There's a new foreword, or I think it's called a preface, technically. I was just at Barnes and Noble signing some copies. A little sweaty. I walked here from there. It's hot in the middle of August in New York, breaking news. But yeah, no, I think the book holds

(01:53):
up really well. It covers a lot of really Risky Business-esque topics. And you know, it's a big book, cheaper now in paperback. It fits better on a shelf, not quite as thick, and there's new content. So I would strongly recommend it, of course. I mean, you know, a little biased here, but, like, thank you.

Speaker 1 (02:09):
Of course. Well, I'm excited to read the new preface, and yeah, I definitely recommend the book to everyone. I will try to repost the review I did of it on my Substack so that people can get reintroduced to it one more time. Anyway, it's a fantastic book. Congrats, Nate. And let's get into some Riverian topics, like the release

(02:35):
of GPT-5 from OpenAI. Nate, have you had a chance to use GPT-5 yet?

Speaker 2 (02:43):
We don't have much choice, you know. They kind of steer you into GPT-5. And at first I'm like, oh, okay, I guess I pay for the Pro plan, and so I'm like, oh wow, I'm one of the privileged few. And now I don't think there's an easy way to get back to all the old family of GPT products that we knew and loved before. Like, people did get attached to the different models for different reasons,

(03:04):
and so now it kind of is implicitly trying to figure out what you want, which is kind of, I mean, it's interesting, right? So on the one hand, you know, I kind of made the joke before about comparing it to, like, a video game release. But anything new, if we release a new presidential model or something, right, anything new might have some kinks and bugs.

(03:25):
I mean, as much as you might say, okay, we have to have it perfect before it ships, I don't think that's practical, because most people are only going to learn it by using it. I mean, you know, I think they did discover it's not doing what, like, Grok did and calling itself Hitler, for example.

Speaker 1 (03:37):
Right, Yeah, I mean I don't know if we could
really call that a win. Like that seems to be
like a baseline.

Speaker 2 (03:43):
Or like Google Gemini drawing multicultural Nazis, you know. So the bar is pretty low. But no, I think it has a little bit of a new car, new model kind of smell, a little bit. The first thing I asked it to do was, my partner and I are planning a trip to, basically, Scandinavia, or the Nordics, technically, right? I want to visit these cities. Give us an itinerary, right?

(04:04):
And it, like, thinks and thinks and has something that seemed plausible to me, very detailed, right? And then I'm like, email this to us as a PDF, and it, like, freaks out, right? It shows some complicated, you know, Python code, and it's like, I don't know how to do this. So you know, it's up and down.

Speaker 1 (04:21):
Least it gave you a plausible itinerary.

Speaker 2 (04:23):
I suppose.

Speaker 1 (04:26):
We've been warned. We were given an explanation by Sam Altman, because I tried to test it, you know, when it was just released, and apparently the auto-switching tool was down, so it seemed like it was a lot dumber than it was supposed to be. So one of the things that they're really touting on this model is that it automatically knows what you want, right, whether it should think

(04:47):
deeply or not, which does not actually seem to be the case. Even now, when it swings back, it's like...

Speaker 2 (04:53):
Hey, we've got a chick here. I don't think we need too much deep thinking. She probably wants a quick answer, get back to the cooking.

Speaker 1 (04:59):
That's exactly, honestly, that's been my experience. And one of the first things I actually tested it on, because it says that, you know, one of the things that it's much better on is hallucination. So I actually gave it a psych question and asked for some sources and, like, papers, and it unfortunately still hallucinates when

(05:20):
you go down that route. So, I'm writing a piece this week about kindness contagion. You know, when someone is nice, you know, how does that spread? And so I asked it some questions about the research in that field. And I didn't really need its help, in the sense that I know the field pretty well, but, like, I was just curious to see what it would come up with. And it gave me some good stuff,

(05:40):
but it also gave me some stuff that just simply does not exist. And I know, you know, as you say, and as a lot of people say, you need to know what to ask it. But the reason I tested this specifically was because one of their big claims was, basically, no more hallucination, which is not true.

Speaker 2 (05:58):
Yeah, I've used it for, you know, I almost always put the articles I'm writing for the newsletter through a fact check in the GPT models or Claude. Sometimes I think this is one of the most useful things that AIs are good at. It took seven and a half minutes for what, for me, is a relatively short article, seven hundred words or something, right? I

(06:18):
am using, like, the thinking version, right?

Speaker 1 (06:21):
Did you tell it to use the thinking version or
did it?

Speaker 2 (06:24):
I think I was on thinking by default, right? And usually I say, have a high threshold, and I didn't say that. But it's really nitpicky. It's like, you know, I made some line about, like, Elon Musk, like, tweeting out, like, anime smut, was the term I used, right? I already toned that down from porn. And it's like, you should be more careful, you should say NSFW images,

(06:48):
smut implies an editorial stance, you know. So, like, it was kind of nitty, as a poker term, right? And, like, a friend actually sent me a poker hand, because we wrote before, or I wrote before, about how ChatGPT is bad at poker, and it played the first hand really well, and then I'm like, okay, simulate more hands, and then it was still kind of not great. Maybe a little better, right? Yeah, no,

(07:10):
it's a little weird, because, look, to me, you kind of had 4.5 and you had, like, o3 and o1, right? Like, I noticed in the spring, I thought, with some of the reasoning, quote unquote, models, for the type of stuff I'm doing, which is not using GPT as a chatbot, right, it's like a work aid, you know, I thought there was an improvement then, and I kind of feel like there's a

(07:30):
bit less this time. You know, also, the OpenAI models, or some of them, are kind of slow, right? I've found sometimes, like, Claude or Gemini or even Grok will spit out things faster.

Speaker 1 (07:42):
Know, and sometimes that matters, by the way, like sometimes
you wanted to take time, but sometimes like you're like, okay,
come on, like let's let's get let's get moving, and
uh yeah that if.

Speaker 2 (07:53):
If you can get me a simple answer, you know. And often, like, in a fact check, I mean, today I had to go to the freaking Barnes and Noble, right, and so, like, the time pressure is often relevant, especially, you know. And oftentimes I also use the models a lot for, like, little programming tasks, right? Like, the programming language I use is called Stata. Some people would say it's kind of fake, but you know, it's a real language, right? And I'm like, I forget

(08:15):
how I do this in Stata, or, full disclosure, even Excel. I'll be like, oh God, what's the complicated formula for this, right? And it seems to me that the AI models in general are seventy to eighty percent reliable for, like, you know, where I want a snippet of code that's, you know, three to ten lines long, right? You can kind of vibe code, right? You're just like, I want

(08:36):
to do this, right, and it's like, you know, it's pretty smart about it. I mean, every now and then it won't work. And you know, I always tell people, these things screw up when they're trying to chain together different steps, and they don't really quite, I mean, they're trying to train themselves, right, but they don't quite know how to stop, right. And so it's like, okay, when I build a model, we're doing an NFL model,

(08:57):
you know, I stop at every point and ask, okay, does this output make sense? You perform a bunch of complicated data operations, sort from the bottom to the top, right? You know, hopefully Tom Brady's listed as one of the best quarterbacks, right, and Ryan Leaf as one of the worst. But like, yeah, I'm keeping myself in the loop. And for that kind of thing, it's like, okay, I'll just save fifteen minutes trying to figure out how to program

(09:19):
this. Or you can debug, right? You're like, why isn't this working? I asked Claude, and it was like, because you made a typo. You misspelled the word "nate." That's why you've been tearing your hair out for forty minutes. You misspelled the word, yeah. So yeah, no, look, I think it's kind of halfway a branding exercise, or, some people seem to vouch

(09:39):
for this a lot. I don't know.
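
A minimal sketch of the workflow Nate describes, in Python with pandas. The players, stats, and column names here are illustrative stand-ins, not data from the episode; the point is the pattern he outlines: do one small operation, then stop and ask whether the output makes sense.

    # One small, vibe-codeable transformation, followed by a human sanity check.
    import pandas as pd

    # Illustrative quarterback stats (hypothetical rows for the example).
    qb = pd.DataFrame({
        "player": ["Tom Brady", "Ryan Leaf", "Joe Average"],
        "td": [649, 14, 120],
        "int": [212, 36, 130],
    })

    # Step 1: the kind of three-to-ten-line snippet an AI might write for you.
    qb["td_int_ratio"] = qb["td"] / qb["int"]

    # Step 2: stop and check the output. Brady should sort near the top and
    # Leaf near the bottom; if not, something upstream is wrong.
    ranked = qb.sort_values("td_int_ratio", ascending=False)
    assert ranked.iloc[0]["player"] == "Tom Brady"
    assert ranked.iloc[-1]["player"] == "Ryan Leaf"
    print(ranked)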

Speaker 1 (09:41):
Well, you know, obviously, like, I don't code, right? And people have said that it seems to be a lot better at coding certain things, which, great, you know, if it is, great. But what you're saying, actually, like, seventy to eighty percent, to someone like me, that makes me actually much less likely to use it, because I'm not you. I don't have that background, so I can't always do, like, a check to

(10:03):
figure out, you know, does this make sense, right? And in the sense that I don't have that technical base, I can't review it in any real way, and so I need it to be accurate, right, because I don't trust myself to spot any potential errors.

Speaker 2 (10:18):
Say the old way is to look at a manual
or like stock overflow or whatever, and there you're starting
through a lot of crap too. Right, it might not
be pertinent to your particular KSE, or it might be
an old version of software or instead of there are
lots of little fussy language things with local variables and
scalers and one of those things mean and what the

(10:38):
different rules are and stuff like that, right? It's a slightly fussy language, and it's good for handling that kind of thing. And again, to me, I'm working in ways where it's fairly failure-proof, right? You're doing one thing, you have an expectation for what that will do to transform the data set, right, and if it doesn't happen, then it won't work anyway, right? But, like, the notion

(10:59):
of, like, I'm just going to sit back here and trust it to do all these things, I mean, I think, you know, to code an entire NFL model, which involves a lot of original research collection and involves a lot of, like, knowledge about the sport, knowledge of how to build models, right, a lot of trial and error, like, you know, I don't think the AIs are particularly

(11:19):
close to doing that kind of work.

Speaker 1 (11:21):
So I think that you just made a really important point, which is to just not expect the world from it, and to know what it can and can't do, which, by the way, already takes a certain user intelligence and, like, knowledge to say, okay, you know what, I don't trust it to do this, but I do trust it to do that. So there is, you know, even though

(11:43):
GPT-5 was kind of hyped as, you know, you don't have to think anymore, you still actually kind of do, right, in order to get the outputs that you want, and to realize, okay, I can trust it with this task but not that task. I can get it to do this, but not that. And I think that, you know, that human in the loop is still very much a thing and still very much needs to be

(12:04):
a thing.

Speaker 2 (12:04):
Yeah. I just kind of keep, like, a little running mental tracker of, like, here are my expectations for AI models, LLMs, large language models, and are they exceeding those or falling short, right? I mean, and then they do weird things, you know. Like, I had a situation where I had a bunch of latitudes and longitudes of NFL stadiums that we'd coded up quickly, right, and we're like,
(12:26):
I want to reverse look these up and tell me
what city they're near, right as a double check, and
like put Atlanta as like Chattanooga and just a little
it's you know, and so like for things like that,
because for data, I want all my data to be perfect, right,
I don't want to misattribute one city that doesn't really
matter if they have the you know, Falcons playing in Chattanooga,
it doesn't matter that much, right, and like and so

(12:46):
for that kind of thing, you know, I I still
would rather have my, yeah, my research assistant do it
or me do it myself. Right, things that require like
a lot of person But you know, it's very frustating,
use them enough where I have like particular rules where
I think they're likely to be helpful and not and
how how safely can you fail and things like that,
But like, yeah, I mean my general view is that

(13:10):
getting kind of savant like and that they're not very
bright about some things and they're freaking genius is about
others as opposed to this notion of like general intelligence,
where I mean, but you know, look, even the things
it's worse at, it's like as good as like a
you know, high school sophomore or so, you know, I mean,
it's not terrible and there aren't too many things where
it's terrible, right, But like, yeah.
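
The reverse lookup Nate describes as a double check can be sketched in a few lines of Python. geopy's Nominatim geocoder is one way to do it; the coordinates and the user_agent string below are illustrative assumptions, not from the episode, and a human still reviews the printed output, which is exactly where a Chattanooga-for-Atlanta mix-up would get caught.

    # Reverse-geocode stadium coordinates as a double check on a data file.
    from geopy.geocoders import Nominatim

    # Approximate coordinates, for illustration only.
    stadiums = {
        "Falcons": (33.7554, -84.4010),  # Mercedes-Benz Stadium, Atlanta
        "Bears": (41.8623, -87.6167),    # Soldier Field, Chicago
    }

    geolocator = Nominatim(user_agent="stadium-check")  # hypothetical app name
    for team, (lat, lon) in stadiums.items():
        location = geolocator.reverse((lat, lon), exactly_one=True)
        # Eyeball each line rather than trusting it blindly.
        print(team, "->", location.address)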

Speaker 1 (13:32):
Yeah, well, I think it really, really depends on what you're asking it.

Speaker 2 (13:36):
You know.

Speaker 1 (13:36):
One of the main things that I've read about people responding to, which highlights an issue that, you know, Sam Altman was like, oh, we didn't realize how big of an issue this was, which I think is very interesting, because people have been trying to say it's an issue, is the change in voice, right, the fact that past models were very sycophantic. You and I were talking before

(13:57):
taping today, and I was like, we should really be pronouncing it "psycho-phantic," because there's been some, there's been some real psycho behavior here. And when they introduced GPT-5, the default tone of voice was very different, right? They did try to address this. And then, within, it didn't even take twenty-four hours,

(14:18):
just, like, immediately, they got all of this pushback, with people saying, no, you know, I've lost my boyfriend, I've lost my best friend, I've lost the person who told me I was a genius. And it makes you realize how many people were using this really not for what it's intended, in ways that can be incredibly bad for a lot of things, right, mental health, just social connections,

(14:42):
all of these things. You know, people were like, whoa, whoa, whoa, what happened to my significant other? And they brought it back, right? So now you can actually select that voice again. We have the default voice, but we also have, you know, the Listener voice, and there are a few other voices.

(15:03):
None of their descriptions actually map onto what that voice actually is. I was reading, there was, like, a Cynic, and I don't even remember what they said the Cynic voice was, but I was like, that's not what a cynic is like. They're really weird descriptors. But basically, you can get GPT to interact with you in different voices,
(15:25):
and, you know, my reaction to that is, you shouldn't always give the people what they want. In a lot of ways, like, this was bad, and you fixed it, like, don't go unfixing it. Because even though they fixed it, they didn't fully fix it, right? They just made it less overt, which, you know, subtle sycophancy can also be bad. But they tried, they at

(15:48):
least initially tried, and now they've really gone back on that, immediately caving to pressure. And if you always give the public what it wants, like, we've talked about p(doom), and, like, a lot of people

Speaker 2 (15:59):
want things they really should not be getting. It's pretty hard, kind of, in an equilibrium, like, not to optimize for what drives engagement in the short run. I mean, you know, on the one hand, I guess they don't need revenue, like, right away. But, like, yeah, no, I think sometimes with these models, you give them an inch and they take a mile with it, right? Like, if

(16:19):
you look at what happened with Grok when it had its moment, the system prompts that Elon, or an unnamed engineer at xAI, was using were, like, not that wild, right? But, like, if you go down a rabbit hole, it keeps kind of getting, like, reinforcement feedback:

(16:41):
this is good, this is good. And, you know, OpenAI's models used to have this thing where, did you like answer A or answer B? They're now outsourcing it to some of their users, right? And, like, yeah, I mean, look, you know, what are you kind of optimizing for, objectively, right? It's kind of easier when you

(17:02):
are trying to train it on, like, a math problem where there's an objectively correct answer, right? You know, the NFL model I'm trying to build, at the end of the day, how accurately does it predict NFL games, right? That's the bottom line.

Speaker 1 (17:15):
You actually have a metric, right.

Speaker 2 (17:17):
And for, you know, feedback where the goodness of an answer is more subjective, then it's a lot trickier, right? You know, I think we've seen some of the reason that, like, Grok hasn't had as much reinforcement learning training, right, or, you see that, right? You know, the results are pretty rough.

(17:37):
I'm mixing metaphors here. And yeah, I mean, it's a weird technology, and I think people understand more about how these models work than before. And, like, by the way, one reason to be optimistic, unless you're a doomer, I guess, is just, like, the amount of human and other capital being poured into AI research, right, is, like, you know, quite something, right? It wouldn't surprise me if

(18:00):
these companies start saying, oh, we have a lot of smart people, let's kind of spin off technologies in energy or quantum computing or whatever else.

Speaker 1 (18:08):
Right. You know, I think that there's so much promise, but I think that with this particular thing, the incentives are misaligned, at least for now, which you were kind of hinting at, right, in the sense that, sure, they don't necessarily need it. But if this is one of the things that fuels revenue growth, right, that people want to feel not lonely, they want, you know,

(18:28):
someone who's kind of reinforcing their ideas, so that they interact more, which is good, right, if they're actually paying more in order to be able to spend more time with the model. And they're interacting more when they feel like, this is my girlfriend, this is my boyfriend, this is, you know, my best friend, this is my counselor, my psychiatrist, you know, whatever it is, my teacher. There was a

(18:51):
guy who Kashmir Hill just wrote about in the New York Times.

Speaker 2 (18:56):
Who?

Speaker 1 (18:58):
Believed that he'd created a new mathematical theory right that
solved everything, and like and tried to actually he tried
to fact check the delusion, which was crazy. He's like,
I feel like I'm sounding crazy and the a I
was like, no, You're absolutely not crazy. You're the most
everyone else is crazy, right like you're saying so. So

(19:18):
was one of these very strange kinds of things where he went in. By the way, his first question was, he wanted it to explain how you get the value of pi, because this was someone who never finished high school and his son needed help with homework, and so he just asked ChatGPT about pi. That was the start of this insane rabbit hole. And,

(19:38):
I mean, we're talking about ChatGPT because of the GPT-5 release, but this doesn't just apply here. You know, LLMs in general will probably be susceptible to this. I'm not sure, right, because all of these examples are from ChatGPT. But, you know, you have this innocuous question that then leads someone to

(19:59):
a very detrimental spiral. And it's not a standalone case, right? And now we know how many cases were below the radar, where nothing, quote unquote, bad had happened yet, given the outcry when the model shifted. And honestly, like, do we really trust a company that's clearly profit-driven? I mean,

(20:22):
Sam Altman's company is profit-driven. I know, I know, it's crazy.

Speaker 2 (20:26):
Gambling, in a casino?

Speaker 1 (20:30):
But do we really trust a company that's profit-driven, right, that's bottom-line driven? Sure, they will fix all these other problems if it's good for them. But do we trust them to fix these things that are really undermining mental health? We've seen with Meta, you know, Facebook, all of these, none of them have, right? For years, they've known that there are issues and they haven't addressed them.

Speaker 2 (20:51):
I'm not as convinced about this particular problem, I mean, because, first of all, what is it substituting for? Is it substituting for Twitter or Reddit forums or some other dark corner of the Internet?

Speaker 1 (21:02):
Potentially, no, because from a psychological standpoint, there's immediate reinforcement and a conversation that goes back and forth, which is very, very different for the brain than, like, saying something and then...

Speaker 2 (21:12):
On Twitter you get immediate feedback.

Speaker 1 (21:15):
But it's not quite the same thing. Just from a psychological standpoint, it can be much more pernicious when you feel like you're talking to an actual person, and personalities do start developing. I mean, what do you like? Obviously this is not ideal. What do you think is ideal?

Speaker 2 (21:31):
Right?

Speaker 1 (21:32):
If you're interacting with ChatGPT, what personality do you want? Me personally, I want no personality whatsoever. I just want it to give me the damn facts.

Speaker 2 (21:40):
You can give it custom instructions, right, which is like an appendix. So I tell it, be honest and straightforward, cite sources. It's fine to speculate, but if you are speculating, label it speculation, that kind of thing, right.

Speaker 1 (21:53):
And it sometimes lies about that too. Like, when I say give me sources, and it lies about the sources, and you say, hey, this source doesn't exist, and it says something like, you caught me. And I'm like, okay, well, you're ostensibly following instructions, but you're not really. And actually, I don't know, I didn't try that "you caught me" thing with GPT-5. I don't know if

(22:14):
it's still going to do kind of a version of that. But, Nate, I mean, if we're looking at the reliability of these models, right, you had mentioned the Chattanooga example, right, where it thought that Chattanooga and Atlanta were interchangeable.

Speaker 2 (22:28):
Basically. Well, Atlanta wasn't in my gazetteer file. I forgot about it, man. It had a little gap just sitting there in Georgia. But I fixed it now, right.

Speaker 1 (22:35):
But if it's getting things like Chattanooga wrong, you start questioning how good its output is on other things.

Speaker 2 (22:41):
Yeah, and if there are, you know, whatever, four hundred rows of data and it screws up five, that's a lot. And the analogy here is something like, okay, if you take the subway in New York, you don't have to, like, look up the timetables, because it's rarely more than, like, a five or seven minute wait for a train, right? You just go to the

Speaker 1 (23:00):
station. Unless it's the B, in which case you'll be waiting forever.

Speaker 2 (23:03):
Sorry, B fans, I'm on the L now. It's a real, it's a real adult train. Rent there.

Speaker 1 (23:11):
The B stands for bullshit.

Speaker 2 (23:14):
Bullshit, okay. And that's when it's part of your continuous workflow, right? ChatGPT is more like, you go to, like, the Amtrak station and you have to plan around it a little bit, right? It's not, you know, it's not kind of the bonus productivity from kind of turning your brain off and just having it handle the work, you know.

(23:35):
With that said, you know, I've come home, uh, nights when I'm tired or busier and have had a couple of glasses of wine, right, and it sure is nice then to be like, oh, I forget the stupid fucking command in Stata, can you just tell me what it is, right? But I'm, you know, still using it in, like, a piecemeal way.

Speaker 1 (23:53):
And we'll be back right after this.

Speaker 2 (24:07):
All this might be kind of not bad for AI safety, right? Like, I kind of think...

Speaker 1 (24:12):
Yeah, well, I think it depends. So I don't know the nitty-gritty of what the improvements were, but some of the reviews that I read have said that the lack of transparency into kind of what's being used and how it's being used is actually potentially not great. That before, you could query much better to actually figure out, you know, what

(24:34):
processes were being used, what models were being used, et cetera. Now you can't. And that, if some coding gets to a certain point, it can do, like, some self-replicating things. And this is kind of what we talked about briefly when we were talking about the AI 2027 report, that some of those things are potentially

(24:56):
becoming closer to being real.

Speaker 2 (24:58):
Real Yeah, I mean this is almost another show, right,
but like you know, humans currently have an important role,
a couple of important roles in the process, right, one
of which is to provide the corpus all the time
that the models are trained on, the second of which
is with reinforcement learning. And like again, math problems are
a weird exception in that you kind of you kind

(25:19):
of know the answers, right? There's an objectively correct answer, it's just really hard to figure out, right, and a human can say, oh, that's correct, right? With other things, it's trickier. If you're, you know, wanting to patent some novel protein that could be used in drug discovery, right, then you've got to test that and make sure it actually works, right? And so if you don't have human reinforcement learning, if you're trying to

(25:41):
train on corpora that are beyond them, I mean, again, are there cases where you can extrapolate and get fifty percent better than the best human, or two hundred percent better, right? The notion of, like, an explosion of superintelligence, I think, you know, I mean, these things still can't book a good fucking Delta flight to Chicago, right? Probably in a year they can.

Speaker 1 (26:01):
But, by the way, talking about, like, security risks, this isn't p(doom), but, like, personal security risk: as it becomes better at doing that, the chance of someone hacking and, like, actually being able to insert some malicious code without your knowledge, so that, you know, they can steal credit card information, et cetera, et cetera. Don't underestimate that. I'm not

(26:21):
talking about you, but I think we as a society are underestimating it.

Speaker 2 (26:27):
People tell it maybe even more than they tell Google, you know what I mean.

Speaker 1 (26:30):
It's crazy. And that is available somewhere, right? They're storing all of that data, and so that means that it can be hacked. And if it can be hacked, it will be hacked at some point. I think that that's kind of the rule of the Internet right now. I've spent enough time with, you know, con artists and bad actors that I know that, like, there's always someone, if there's a new technology, there's someone one step ahead figuring out, okay, how do we exploit this?

Speaker 2 (26:54):
Does it call you Maria now?

Speaker 1 (26:55):
Sometimes? No, it hasn't called me Maria. But I've tried not to, I mean, obviously you have to sign into it, but I try to minimize my sharing of any personal information. But that only takes you so far. But yeah, I think that that's kind of a personal security risk that people are probably underestimating. I'm guessing there are

(27:20):
some people who are very well aware of it. But I would hesitate before giving it access to, you know, my travel plans, access to any itineraries, credit cards, et cetera, et cetera, because those are things where it seems like there could be vulnerabilities. And, as

(27:41):
you said, you know, this launch has the new car smell. So, like, at the beginning, there are going to be bugs, there are going to be issues. And sure, eventually some of them are going to get sorted out, but there's always going to be, like, the initial data breach that prompts it to get sorted out. And I don't want to be part of that initial data breach.

Speaker 2 (28:00):
Yeah. I don't know that these giant tech companies have behaved in particularly trustworthy ways, have they? Right.

Speaker 1 (28:06):
Right. Yeah, I don't think they have. So I don't know how much we want to give them. And, you know, Nate, there might be someone who's like, uh oh, does Nate upload all of the specifics of his models? Maybe we should hack Nate's ChatGPT so that we can steal his model and improve on it. I mean, that seems silly, but it actually makes corporate espionage,
(28:28):
those types of things much easier too. It's what I've
always said with you know, con artists, that it's become
so much easier, and the barrier of entry to conning
people has become so much lower simply because we share
so much information online unthinkingly, and so it becomes the
case where before it would take someone a lot of
kind of research to try to figure out, you know, oh,

(28:50):
what are the things you know? Where does Nate like
to go? What does he? You know? What are the
pressure points that I can How can I approach him?
Et cetera, et cetera.

Speaker 2 (28:57):
And maybe you can train it on poker tells, right? Yeah, watch five hours of, who's a poker player, Adam Hendrix, and figure out, like, what are his tells, right?

Speaker 1 (29:06):
But now, you know, con artists can use all that information very quickly, because you've shared it. And with ChatGPT, people are sharing so much, right, on such a personal level, not thinking that this will become public, and I don't know why they think that it won't. So it's a very, you know, it's a very interesting conundrum. And I think there are so many amazing things that can kind of come out of this,

(29:27):
and then some very dystopian things, and some things that will potentially really hurt individuals. Yeah.

Speaker 2 (29:32):
I mean, look, I'm not even talking about the audio and video capabilities. I mean, look, it remains the case that if you beamed to 2025 from 2020, right, you would be amazed by what these models can do. You would absolutely be considered a freak if you had predicted five years ago that you'd have this machine that for many things can, like, pass the Turing test.

(29:53):
Some AI researchers don't like the Turing test, but, like, it's, you know, it's basically giving you plausible human-level performance across a variety of cognitive tasks, deficient in some, excellent in others, right? Like, that still is quite amazing. And part of what I'm reacting to is, like, you know, where is the hype relative to the reality? And it felt like a year ago it was like, okay, people outside of

(30:14):
Silicon Valley are just not seeing at all the power of this, and now they kind of do. And I still think that, kind of, like, the political types are significantly behind the curve, calling them chatbots or whatever. But also, like, you know, you read these really smart researchers saying, oh, we think there's going to be a singularity in two years, right? And I'm like, you know, look,

(30:36):
there's a lot of trucks you can drive in between "oh, it's just a chatbot" and "singularity by 2027," right? Right. Yeah, it feels like pretty safe bounds.

Speaker 1 (30:46):
I totally agree with that.

Speaker 2 (30:48):
You know, just to come back to the question of your ideal AI chatbot personality, this sounds weirdly like a Howard Stern question or something. But Maria, what's your, what floats your boat?

Speaker 1 (31:00):
Well, Howard, you know, earlier I had said that I want an AI that just gives me the facts, right? Like, I do not want the damn thing to have a personality. This is an AI, it's a computer. Like, this is not my friend, and I don't want its opinions. I just want it to give me factual answers. Now, I know that that's not actually possible, because, as

(31:22):
we've talked about many, many times, I'm probably not asking it about math problems, because I don't have any use for that. I'm probably asking it for things that will inevitably be opinion-tinged, because, you know, the inputs were made by humans. But yeah, I want it to be as neutral as possible. And, like...

Speaker 2 (31:45):
Do you, were you familiar with Fivey Fox? Does that mean anything to you?

Speaker 1 (31:48):
No?

Speaker 2 (31:49):
Fivey Fox was, like, the mascot of the FiveThirtyEight models, right? It was a cartoon fox.

Speaker 1 (31:54):
Oh, I've seen the picture of the cartoon fox.

Speaker 2 (31:57):
Presenting my model, like, maybe it'd be a good personality, right? Yeah, a little, you know, a little furry animal.

Speaker 1 (32:02):
I am. So, Nate, are you familiar with Microsoft Office's paper clip, Clippy? Oh my god, I think we all have stories about Clippy.

Speaker 2 (32:14):
I mean, there are things about paper clips and AI. You probably don't want it to...

Speaker 1 (32:17):
No, we do not want paper clips anywhere near our AI models. The paper clip problem has given us enough headaches. By the way, for those of you who aren't familiar with the paper clip problem, you know, you might know Microsoft's Clippy, but not the paper clip problem: it's an AI philosophy problem, first proposed by Nick Bostrom, about basically how paper clips can cause the end of

(32:39):
the world. Well, we'll talk more about it in today's Pushkin Plus segment. What about you? What's your ideal personality?

Speaker 2 (32:46):
Yeah? Like, I mean, you know, my custom instructions are to be straightforward, to provide a lot of detail. I'm not looking to the AI for, like, you...

Speaker 1 (32:55):
Don't want emotional, you don't want fiery foxy or fighty foxy.

Speaker 2 (33:00):
I take it back, Fivey Fox. No, I want Fivey Fox.

Speaker 1 (33:04):
Yeah, all right, Fivey Fox. So you want Fivey Fox. You want it to be like a cartoon, except not a paper clip.

Speaker 2 (33:12):
Not a paper clip.

Speaker 1 (33:15):
Let's take a little break, Nate, and then talk about Nvidia and another element of AI, the chips that make it happen. Nate, this has been such an AI-

(33:35):
y, AI-y. That's a weird word, but you know what I mean.

Speaker 2 (33:39):
Week.

Speaker 1 (33:40):
The other kind of big news has been Nvidia, and the fact that, you know, we've gone through quite the cycle on Nvidia, where at first there was a ban on Nvidia selling its chips to China. Then, within the last month, the ban was softened, and

(34:00):
Trump announced that they had kind of reached a deal where Nvidia was going to be able to sell some of its H20 chips to China. And then, all of a sudden, there was an announcement that now Nvidia can sell these chips as long as the US government gets fifteen percent of the profits. So this

(34:21):
is starting to seem a lot like we're now in the world of the Godfather or the Sopranos, and less like we're in the world of the US government. Give me a taste. Give me a taste, Nate, and then you can do whatever you want. But Papa wants his taste of the action. Yeah.

Speaker 2 (34:37):
Look, I mean, Trump had his, or the White House had its AI Action Plan, which we talked about a couple weeks ago, and, like, you know, people I trust thought it wasn't that bad. But these are people who, you know, one thing you might think is good for Trump, or good for, I don't know, what I call the River in the book, having more influence over the White House, is that they're all really competitive. They want to beat China, right?

(34:57):
And here the US now has, like, an incentive for the best chips in the world to be sold to China, right? I haven't checked, I assume it's going to, like, the Treasury, and it is not, like, Trump's personal stash. But that seems, that seems a bit weird. And, like, granted, okay, you manufacture a chip and it's kind of hard for China not to get it eventually, I'm sure. There are

(35:18):
black markets and gray markets, although, you know, people have used the parallel of, like, nuclear fissile material. We track that pretty carefully, potentially. But yeah, no, I imagine, I mean, let's go with that analogy, right? It's like, oh, okay, you can sell radioactive material to Iran.

Speaker 1 (35:33):
Right as long as as long as hes goverment get.

Speaker 2 (35:36):
Not that China's Iran, exactly, right. But they are right now the only country that's competitive with us on AI, right? Yeah, there's no third place, right? Maybe, at the least, you know, let's give them some too.

Speaker 1 (35:49):
Yeah, no, it's kind of crazy. And Trump just says, about the more advanced Blackwell chips, he's like, oh yeah, I'm open to us selling those as well if we can also get a percentage. So, all of a sudden, the national security concerns, and you and I did a whole segment about the AI Action Plan, and the one thing that was kind of a concern in it was China, right? And all of a sudden, that

(36:09):
seems to have gone out the window if we can, you know, grease the palm a little bit and get the fifteen percent kickback. And so, you know, all of a sudden, you realize, well, it was just a talking point, right? Like, it really didn't matter. National security doesn't matter if we can get a percentage of this. By the way, after

(36:29):
these announcements, China has itself said, hey, companies, as in, like, Alibaba, you know, Chinese companies, we don't want you buying these US chips, because we're worried that they're going to kind of insert location tracking and back doors and all these things into them. We want you to be buying local. That would be smart. And Nvidia said, no, no, absolutely,

(36:51):
we would never do that. So they say, you know, we want you to buy local Huawei chips, and you...

Speaker 2 (36:58):
Want your local pitches and exactly.

Speaker 1 (37:03):
So China is actually, like, a little bit skeptical of this. But it's not a law, and, you know, it's not actually clear yet how it's going to be enforced, because the agency that issued this directive doesn't actually have enforcement power. And, by the way, there's also already been a pre-order of, I think it was, like, seven hundred thousand, something like that. There's been a pre-order of a

(37:24):
shit ton, in other words, of chips that are presumably going to start getting shipped now. And so now, you know, we talk a lot about incentives, and for the US, like, now it's all out of whack. The other part of this, by the way, is that there is a ban in the Constitution on export taxes, and so I can see there being a

(37:46):
legal challenge here. I mean, this kind of is an export tax. Actually, think about it, a fifteen percent... Like, they can probably...

Speaker 2 (37:54):
A ban in the Constitution on export taxes? Yeah, I didn't know that.

Speaker 1 (37:57):
Yeah, there's a ban in the Constitution on export taxes, so we cannot levy export taxes on companies. Yeah. So you can argue that this is...

Speaker 2 (38:07):
Well, we're going to change the Constitution, then, it seems.

Speaker 1 (38:10):
I mean, I think that we have seen that this particular administration has no problem trying to change the Constitution, or trying to change the facts when the facts don't agree.

Speaker 2 (38:20):
If you were allowed one free constitutional amendment, what would
you do?

Speaker 1 (38:27):
One free constitutional amendment?

Speaker 2 (38:28):
One free. I guess you're, by fiat, allowed to enact a constitutional amendment.

Speaker 1 (38:33):
Oh, you know, it's a really good question. I don't have a ready-made answer for you. Is it something you've thought about?

Speaker 2 (38:39):
It's harder to do in practice than in theory, but, like, to ban gerrymandering is one.

Speaker 1 (38:44):
That would be amazing.

Speaker 2 (38:46):
Yeah, yeah, pretty good one. That would be a really good one. I mean, the Senate, I mean, the fucking, you know, I think we should sell North Dakota to Canada.

Speaker 1 (38:57):
All right, all right, I need a little bit more
explanation here.

Speaker 2 (39:02):
Too many fucking Dakotas. They're nice states. There's surprising beauty in South Dakota in particular, I'll tell you that much. But I don't think the Dakotas need four senators between them.

Speaker 1 (39:12):
This is true. I mean, I do think that the senatorial kind of model of representative government is broken.

Speaker 2 (39:20):
You trade North Dakota for Greenland.

Speaker 1 (39:27):
This podcast is devolving.

Speaker 2 (39:30):
Like, oh, I just came home from Denmark. Oh, you mean Fargo, Denmark. Never mind. We're taping in the studio. If you're listening to the audio version, we have our producer turning around and...

Speaker 1 (39:43):
Take it back to Nvidia. Yes, take it back to Nvidia. So, yeah, we have this incredibly perverse situation, right, where the incentives are just now completely fucked. Now, the reason we got on our tangent is because of the export taxes. Let's assume that legally this is upheld, that the fifteen percent is allowed to stand.

(40:04):
It also sets an incredibly dangerous precedent for, you know, for everything. Like, it's a really scary proposition to think that now, well, as long as you kind of give a kickback to the US government, you're fine. So we do see this kind of quid pro quo mentality, where, like, you know, you help me,

(40:24):
I help you, And that is something that norm that
should not be happening in a healthy democracy.

Speaker 2 (40:31):
Yeah, I mean, I think that train flew right past the station, Maria. You know, look, it's part of, it's the same thing with the tariff policy, right? Why are we doing these tariffs that try to encourage the use of American-made products? Is it trying to do industrial policy, is it trying to do foreign policy, right, or just trying to make a bunch of money for the government, right? And sometimes Republicans

(40:54):
are kind of caught, or tariff defenders are kind of caught, in between saying, you know, actually, the ideal amount of money tariffs make is zero, because then we onshore everything, and people saying, hey, that's going to make up for, like, a loss of tax revenue elsewhere, right? And it's kind of the same thing with this China thing. And again, you know, Nvidia is like, okay, sure. Yeah, by the way, I own Nvidia stock, right? So, yeah.

Speaker 1 (41:16):
All right, yeah. Well, no, but you can actually envision a future, right, where companies, when they're calculating profit, they're like, and this is the cut. Just like before, when you had, this is the cut we give to the mob boss; this is the cut that goes to Trump. It's the cost of doing business. And so we're willing to do that.

Speaker 2 (41:35):
The Italians, you know, Italians, French, we're acting like the fucking Europeans, you know? Yeah, the southern, less efficient Southern Europeans. Yeah.

Speaker 1 (41:48):
So, obviously, I mean, it's the understatement of the day to say it's not a good look. But it also doesn't bode well for a lot of things. Yeah, so I think, you know, bottom line, this really is, I think, bad for our economic prospects and for the way that other countries

(42:11):
see us as well, right? Like, that matters, especially when our currency matters, and kind of our reliability as a trading partner and as a lender, and all these things matter. So reputation, reputation matters, and foreign reputation matters. And, in other stupid things, by the way, we're going to be hosting Putin on US soil, even though he

(42:35):
has a warrant out for his arrest. Really nice to
see you.

Speaker 2 (42:39):
He has a warrant? Who has an arrest warrant in the US for him?

Speaker 1 (42:42):
Well, the ICC has one, okay? So he technically cannot leave Russia, because any other country would have to...

Speaker 2 (42:48):
Kind of fun to arrest Vladimir. That would be funny.

Speaker 1 (42:53):
Funny, but that is not happening. Yeah, for war crimes against Ukraine. Is it in Alaska or something? It's in Alaska, yeah. So we'll see what happens there. But anyway, all of this is not great news. So we have, yeah, we have a mixed bag for you today on Risky Business. And on export taxes, let's see what happens on

(43:18):
the legal side of things, and whether this is in fact deemed an export tax when it's challenged.

Speaker 2 (43:23):
Every time GPT-5 hallucinates, they have to pay the US government fifteen cents.

Speaker 1 (43:26):
That would be, that would be something, Nate, that would be something. Let us know what you think of the show. Reach out to us at riskybusiness@pushkin.fm. And by the way, if you're a Pushkin Plus subscriber, we have some bonus content for you that's coming up right after the credits.

Speaker 2 (43:48):
And if you're not subscribing yet, come on, really? But consider signing up for just $6.99 a month. You get access to all that premium content and ad-free listening across Pushkin's entire network of shows.

Speaker 1 (43:59):
Risky Business is hosted by me, Maria Konnikova.

Speaker 2 (44:02):
And by me, Nate Silver. The show is a co-production of Pushkin Industries and iHeartMedia. This episode was produced by Isabella Carter. Our associate producer is Sonya Gerwick. Sally Helm is our editor, and our executive producer is Jacob Goldstein. Mixing by Sarah Bruguiere.

Speaker 1 (44:18):
If you like the show, please rate and review us
so other people can find us too. But please only
rate and review if you like the show, because you
know we like good reviews. Thanks for tuning in.