Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. Welcome back to Risky Business, a show about making
better decisions. I'm Maria Konnikova.
Speaker 2 (00:28):
And I'm Nate Silver.
Speaker 1 (00:31):
Nate, we are both in Vegas, so we're going to start the show off with a little update of how our first week of the World Series has been going, whether we are still as bright-eyed and bushy-tailed as we were this time last week. And then we'll talk a little bit of politics, right? Yeah.
Speaker 2 (00:47):
We'll talk about ranked-choice voting and its implications for the Democratic mayoral primary, which is coming up in New York, as well. We'll abstractify that a little bit in terms of: is this a good system, what does it reward or punish? And then we'll talk about AI a little bit, I know, a frequent topic on this show. I was recently at the Manifest conference in Berkeley, California, before I came out here to Vegas. A little bit of an update on the AI twenty twenty seven report,
(01:08):
which we talked about on a previous episode, and just kind of how we're feeling about p(doom), and kind of how the Silicon Valley community is adjusting to the constant developments in the sector.
Speaker 1 (01:17):
Yeah, I'm looking forward to hearing what you learned at that conference. But let's start a little closer to home and talk about the p(doom) of our bankrolls. So, Nate, how has week one treated you?
Speaker 2 (01:34):
So, I have fired fifteen bullets. I know it's a violent metaphor; we're not super PC here, but a bullet, in poker terms, is an entry into a tournament, right? So I probably entered ten tournaments, because you can enter some multiple times. I've had fifteen entries into those ten tournaments. Of those fifteen entries, I've had three cashes, or twenty percent,
(01:54):
which is pretty typical, right? You typically cash, depending on the tournament, twelve to fifteen percent of the time. So I guess a little bit above average.
Speaker 1 (02:01):
Right.
Speaker 2 (02:01):
However, here's why it's hard to make money in the
short run, the medium run in poker tournaments.
Speaker 1 (02:06):
Right, in the short run, the medium run in the
long run.
Speaker 2 (02:09):
Period. Number one, even if you outlast ninety-five percent of people, you often only make roughly two times your buy-in, or maybe a bit more than that, right? So if you invest fifteen hundred dollars in a tournament and you make it to, like, the ninety-fourth percentile, the prize might only be around double that, so net of the
(02:30):
fifteen hundred you paid to enter, especially if you entered twice, you're barely ahead. Basically, it's a very top-heavy structure where you have to get into, like, the top two percent, or you have to run good and cash in the more expensive events, right? So the cashes I have are in an eight-hundred-dollar event, a fifteen-hundred-dollar event, an eleven-hundred-dollar event. I also entered a ten-thousand-dollar event that was a quick exit,
(02:53):
a three-thousand-dollar event, quick exit, right? You basically either have to run really, really deep or have your cashes be in your higher buy-in events. I've done neither. Therefore I can say, oh, I have cashed plenty, I've been in the running, been in the mix, but, like, still down money, and that's often very typical.
Speaker 2 (03:07):
How about you, Maria?
Speaker 1:
So, I have not played nearly
as many live events as you. I only have one
live cash, so that was the first event that I played, the eight hundred. And then I played the three-thousand-dollar freeze-out and did not cash that, and the Monster Stack, as you know, Nate, that did not go
(03:30):
well for me. That was one of the ones that you cashed. And then yesterday I played the three-thousand-dollar, basically it was a turbo, and you played that as well, and neither one of us cashed. Luckily, I only had to fire one bullet; I was unable to re-enter, so that's good. That saved me three thousand. However, what has kind of saved my last week is online. So I
(03:53):
fired a few of the online events, came in fifty-seventh out of, like, twelve hundred or so in one of them, for about three thousand dollars. And then last night came in seventh, firing a three-hundred-and-fifteen-dollar event, or three twenty. And that was very sweet, because seventh was about seven thousand dollars, which is a great ROI. I was only in for
(04:15):
one bullet, right? You pay three hundred and twenty dollars and you get seven thousand, which is amazing. First, as you know, would have been a lot more. And I was very proud of myself for getting that far, because I had an incredibly short stack basically the entire time. I just kind of never had chips to work with, and I ended up making it quite far. But that
(04:36):
kind of saved my week a little bit, because live has not been going well. Well, it's been going, you know, it's a tiny sample size, right? This is kind of what you have to understand: you need to think of things in a bigger picture, as opposed to just saying, oh, I've only cashed once, because I've only played four live events, right? So,
(04:59):
yeah, so the fact that I cashed.
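The payout math Maria describes, one three-hundred-and-twenty-dollar bullet returning about seven thousand dollars, can be sketched in a couple of lines. The dollar figures are the ones mentioned above; the helper function itself is just for illustration:

```python
def tournament_result(buy_in, payout):
    """Profit and return multiple for a single tournament bullet."""
    profit = payout - buy_in
    multiple = payout / buy_in
    return profit, multiple

# Maria's online score: one $320 bullet, roughly $7,000 for seventh place.
profit, multiple = tournament_result(320, 7000)
print(profit, multiple)  # a $6,680 profit, close to a 22x return on the bullet
```

A min-cash, by contrast, is usually around a 2x return, which is why one deep run can rescue a whole week of bricked bullets.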
Speaker 2 (05:01):
Yeah, I don't. I haven't liked the online ones. I've kind of sworn off online poker.
Speaker 1 (05:07):
Yeah, but yeah, there are definitely issues with online, but it's, it's helped. It's helped me stay more even than negative for the first week.
Speaker 2 (05:16):
The default experience at the World Series of Poker is you're always, like, one or two contingencies away from having a really great World Series and one or two contingencies away from having a really bad World Series, right? You know, I made it to the very end of day two of the Monster Stack. You are in the money, right? And then I made a huge semi-bluff. If you
(05:37):
don't follow at home, a semi-bluff is when you are hoping to get the opponent to fold, but if not, then you have a good draw that will win sometimes. So I had basically a straight flush draw, and it was on the turn, so you now have only one card to come to make your hand. And the opponent, I thought, was pretty strong, but I thought the odds compelled this play,
(05:58):
and, whatever, I'm there, I'm there to run deep, I'm in the money. I went for it. He had a set of queens. I didn't make my draw.
Speaker 1 (06:06):
That was not going to fold. Yes, it's funny, and you say it felt like the opponent was strong, and I think we've all had that feeling, you know, where, like, just based on the way someone's playing, based on kind of the dynamics of the hand, you know the person is strong. But there's strong, and then there's strong, right? A set of queens is a hand that's probably not going to fold. If you had, like, naked top pair,
(06:29):
something like that, that might fold. And it's hard, sometimes it's hard to know the difference. But you had a straight flush draw. I think that you have to go for it in that situation. And Nate, you and I are playing for the win, not for the min-cash, right? We're playing to get chips and to win this thing. So let's do it, and let's have a better update for our listeners next week. So, Nate,
(06:50):
let's turn to a totally different topic. Let's go politics and talk a little bit about ranked-choice voting. This is, correct me if I'm wrong, the second election in New York City where the voting has been changed, so that now we are in a situation where we have ranked-choice voting instead of the usual first-past-the-post
(07:13):
system. So let's talk just a little bit about what the differences are. I am correct, right? This is the second election where this is in play.
Speaker 2 (07:22):
Correct, Yeah, only for the primary. Yes, actually it's not
used in the general election, which could also be chaotic.
We'll probably get to that later on.
Speaker 1 (07:30):
Yep, all right. So, normal, you know, first past the post: the person who gets the most votes wins, right? And there can be runoffs if no one gets the requisite majority. In ranked-choice voting, it doesn't quite work the same way. You get to rank your candidates in order, you know, one, two, three, four, five,
(07:50):
however many ranked choices you have in a given election; every election is a little bit different. So let's pretend that we're voting for our favorite podcasters, just a random, random example. And let's imagine that, you know, a bunch of people get lots of votes, and when you tally everyone up, I,
(08:11):
Maria, come in last. So I'm going to be cut, because the person who comes in with, you know, the fewest votes is going to be cut. But what's going to happen to my ballots? Well, I'm going to be cut, but for the people who ranked me first, whoever they ranked after me is going to get all of those votes. So imagine that everyone who voted for me actually put Nate after me, right?
(08:35):
And when you cut me, between the people who voted for Nate first and the people who ranked Nate after me, all of a sudden Nate has lots and lots of votes. And that's how ranked-choice voting works, right? Your votes get redistributed.
Speaker 1 (08:48):
Nate, congratulations, I've given you all of my votes, and you're going to be, you're going to be zooming past those other podcasters.
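The elimination-and-redistribution process Maria just walked through is instant-runoff counting, and it can be sketched in a few lines. The podcaster ballots below are a made-up illustration, not real data:

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the last-place candidate, passing each of
    their ballots to its highest-ranked surviving choice, until someone
    holds a majority of the still-live ballots."""
    alive = {c for ballot in ballots for c in ballot}
    while True:
        # Each ballot counts for its top-ranked surviving candidate.
        tally = Counter(
            next(c for c in ballot if c in alive)
            for ballot in ballots
            if any(c in alive for c in ballot)
        )
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        alive.discard(min(tally, key=tally.get))

# Ezra leads on first choices, but Maria's voters listed Nate second:
ballots = [["Maria", "Nate"]] * 2 + [["Nate"]] * 3 + [["Ezra"]] * 4
print(instant_runoff(ballots))  # Nate: Maria is cut, her ballots move to Nate
```

With plain first-past-the-post counting, Ezra's four first-choice votes would win; under instant runoff, Maria's redistributed ballots put Nate over the top.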
Speaker 2 (08:56):
My votes got redistributed to Ezra Klein.
Speaker 1 (08:59):
Apparently we tried, we tried, right? But, so, you know, we joke, but there are pros, there are cons, and there's also strategy that comes up with ranked-choice voting, and this has actually played out in elections in the past, where people do have coalitions and try to say, okay,
(09:20):
like, let's try to band together, so that if one of us is eliminated early, our votes get passed to the other person. So that's kind of ranked choice in a nutshell, Nate. The last time this was used in New York City, it was a little bit of a disaster.
Speaker 2 (09:34):
Well, it depends on who you would ask, right? So you had Eric Adams, who's a former cop, in the conservative lane of the Democratic primary, not conservative relative to the US overall, but there are actually a fair number of conservatives who are Democrats in New York, because there's no point in being a Republican. So, you know, like, fifteen percent of New York City Democrats voted for Trump, which is much higher than nationwide. You had Maya Wiley,
(09:57):
who was the progressive candidate, and you had Kathryn Garcia, who was at the time, I believe, the Sanitation Commissioner, who was, like, the competent technocrat, right? So what ranked-choice voting is sort of theoretically supposed to do is produce a consensus choice where, yeah, the left likes Wiley, the right likes Adams, and the center, again relative to the
(10:19):
New York electorate, likes Garcia. However, you know, in principle, you could have what's called the Condorcet winner, named after a French political scientist, I believe, the Marquis de Condorcet, where, like, let's say Garcia, because she's in the middle, gets all the other candidates' voters as her second choice. So if she were head to head against Wiley, she wins, because both the moderates and the conservatives prefer her.
(10:42):
If she's head to head against Eric Adams, she wins, because, like, all the moderates and the progressives prefer her, right? However, if in the previous round she's stuck in the middle in third place, if it's, you know, forty percent Adams, forty percent Wiley (again, conservative, progressive), twenty percent Garcia, then she's eliminated before she has a chance to beat
(11:04):
the other candidates one on one. What actually happened is she just barely beat Wiley in the penultimate round and then just barely lost to Adams in the final round, right? And so what you almost had happen is that the candidate who would have been the winner head to head almost got eliminated, because she didn't have enough, like, first-choice votes after other votes were redistributed. I know that
(11:26):
sounds kind of crazy, but number one, I want to emphasize that, like, you know, one thing that helped her was that she had agreed with Andrew Yang, who at one point had been the frontrunner and then faded out, you know, we're going to mutually endorse one another, right? It's a big burden on voters to have to list five candidates.
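Nate's forty-forty-twenty scenario can be checked directly: Garcia beats either rival head to head, which makes her the Condorcet winner, yet she is the first one eliminated under instant runoff. The ballot counts below are hypothetical numbers echoing the discussion, assuming each camp lists Garcia second:

```python
from collections import Counter
from itertools import combinations

def head_to_head(ballots, a, b):
    """Winner of a two-way matchup: whoever is ranked higher on more ballots."""
    wins = Counter({a: 0, b: 0})
    for ballot in ballots:
        preferred = next(c for c in ballot if c in (a, b))
        wins[preferred] += 1
    return max(wins, key=wins.get)

# 40% Adams-first, 40% Wiley-first, 20% Garcia-first, Garcia second everywhere.
ballots = ([["Adams", "Garcia", "Wiley"]] * 40
           + [["Wiley", "Garcia", "Adams"]] * 40
           + [["Garcia", "Adams", "Wiley"]] * 20)

for a, b in combinations(["Adams", "Wiley", "Garcia"], 2):
    print(a, "vs", b, "->", head_to_head(ballots, a, b))  # Garcia wins both of hers

# ...yet she has the fewest first-choice votes, so instant runoff cuts her first.
firsts = Counter(ballot[0] for ballot in ballots)
print(min(firsts, key=firsts.get))  # Garcia
```

This is the "center squeeze": the pairwise-strongest candidate never survives long enough to face anyone one on one.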
Speaker 1 (11:43):
Yeah. No, so we've talked a lot on the show about cognitive load, right? Like, how many things can you keep in mind? We're busy people. People's attention is an incredibly finite resource. Like, just how much of your time are you investing in following politics and following all the candidates, you know, and trying to get into the weeds and kind of figure out, okay, who stands where,
(12:06):
who do I believe in? It's much easier to say, oh, okay, like, I like this guy or this woman, right, than to say, okay, in order, these are my five preferences. And people can get confused as well, right, if they don't quite understand how ranked-choice voting works. They might think that it's like, oh, I'm giving points, right,
(12:27):
because, you know, sometimes you have these voting systems where you allocate different points to different people, and that's just basically how it works, right? Like, oh, I gave the most points to Nate, I gave the second-most points to Maria, and everyone's going to get those points tallied up. That's not how ranked-choice voting works. But people sometimes, you know, especially if you've never done
(12:47):
it before, it can be confusing. It seems straightforward, rank five, but it's not, because ranked voting systems do differ and we're not used to it, right? I don't know any other voting in your life where you vote and your votes get redistributed to your second choice, right? That's not a situation
(13:09):
you commonly encounter. And so unless you can think through the strategy of, okay, what does that mean for how I rank candidates, you can get an outcome that you didn't actually mean to happen.
Speaker 2 (13:23):
Yeah, yeah, I know. I mean, there are systems, there are alternatives, like what is called approval voting, where you just vote for as many candidates as you want, right, as many candidates as you like, you think would be a good mayor, good president, whatever, just check the box. Like, that's a little bit more straightforward and less subject to manipulation.
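Approval voting, as Nate describes it, just counts every checked box equally: no rankings, no redistribution. A minimal sketch, with made-up ballots using names from this episode:

```python
from collections import Counter

def approval_winner(ballots):
    """Each ballot is the set of candidates a voter approves of; the
    winner is simply whoever is approved on the most ballots."""
    tally = Counter(c for approved in ballots for c in approved)
    return tally.most_common(1)[0][0]

# Three voters, each checking as many boxes as they like:
print(approval_winner([{"Lander", "Myrie"}, {"Myrie"}, {"Myrie", "Tilson"}]))
```

Here Myrie appears on all three ballots, so he wins even though no one cast a ballot for him alone.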
You know, also, like, guess which people take the time
(13:44):
to carefully think through their five preferences (in New York you get five listings, right): it's the college-educated professional class who read The New York Times, right, and, like, research all the candidates and do their duty. So it kind of, like, quasi-disenfranchises voters that might be struggling a bit more or
(14:05):
might not be kind of as news-consuming, right? You know, here, I mean, to discuss the problem, we had a good freelance article at Silver Bulletin about this last week, right. But, so,
the problem is, you have, like, Zohran Mamdani, a former or current member of the Democratic Socialists of America, right, so quite left-wing, but he has run, like, an attractive campaign. I mean, he has good commercials. He's a good-looking guy, right? He has some energy. He's young in a political environment where there's too many fucking old people, right? You know, he is quite far left. Yeah, he is quite far left, but has proposals like free bus service, like city-run grocery stores in food deserts, a thirty-
(14:49):
dollar-an-hour minimum wage, or thirty-five, something really high like that, right? But, like, it's actually stuff that, like, kind of poll-tests pretty well, like a rent freeze. Like, economists hate this idea, but he has made it a very competitive race against Cuomo. But there are plenty of voters who think, I don't like either one, right? Cuomo hasn't lived in New York in thirty years. He has the Me Too stuff. He's a
(15:09):
carpetbagger. He's domineering. We don't want our own Trump.
Speaker 1 (15:12):
He eats bagels the wrong way.
Speaker 2 (15:14):
Zohran is too far to the left. People have issues with him on Israel if they're pro-Israel; they have issues with him on budgeting and taxes and a lack of experience, right? And you have, like, all these technocrats running in the center lane: Brad Lander, Adrienne Adams, Scott Stringer, Zellnor Myrie, right? You have a hedge funder named Whitney Tilson,
(15:35):
I think. And so, like, you have all these technocrats running, and what's going to happen is that, like, you know, one of the technocrats will probably finish in third place and be eliminated, even though in principle you could have, like, an Adrienne Adams or a Brad Lander who would win head to head against these two polarizing candidates.
Speaker 1 (15:59):
And we'll be back right after this.
Speaker 2 (16:13):
Now, The New York Times did something interesting, right, which is, they said, yeah, the Times declared a year or so ago that they're no longer going to endorse in local races, right.
Speaker 1 (16:25):
Which is bizarre, right?
Speaker 2 (16:27):
Well, I don't think newspapers should make endorsements, so it's not that. Yeah.
Speaker 1 (16:31):
Fine, so if you don't think they should make endorsements, period, that makes sense. But if they're making endorsements in national elections, they...
Speaker 2 (16:36):
Do in local rates. I mean even mostly The New
York Times endorse both Amyklobishar and Elizabeth Warren in the
twenty twenty Democratic primary, right, which did neither any favors
I don't think, right, and they do not use ranked
choice voting in most states in the primary this time,
they're like, don't vote for Zoran and rank Cuomo somewhere.
(16:57):
But then like rank like Lander or Tilson first, and
like I put the editorial through like different large language
models AI models, Right, so we're like, yeah, it's tell
you to vote for Brad Lander and then put Cuomo fifth.
So like so if it comes down to Cuomo versus
O run that he will he will prevail, right, but
others were confused, right, these events almost like they kind
(17:20):
of say all these mean things about Cuomo and the
state of vote for him anyway, right, so very very
weird and half-assed. And then Lander has cross-endorsed Mamdani, and so it just kind of adds an element of randomness. To be fair, Cuomo is almost certain to win the first-choice vote, right? So in a normal mayoral election, even though there is kind
(17:42):
of an anti-Cuomo, anybody-but-Cuomo coalition, which is pretty broad, right, and has different flavors to it, right, in a normal election Cuomo just wins, goes to the general election, and so forth. Here, Mamdani has a decent chance. He's at twenty-five percent on Polymarket, where, as you probably know, I'm an advisor. And, by the way, I was into Mamdani early. I thought, like,
(18:05):
you know, here's a guy running a pretty energetic campaign at a moment when, like, I don't think New York is suddenly having some, like, progressive surge. I think, like, people are sick of the establishment, and no one's a bigger figure of the establishment than Cuomo, right? And, you know, he's running a campaign that feels a little bit different. And I think in
(18:26):
city elections it doesn't have to be so ideological anyway. Like, I'm to his right. I wouldn't be terrified by his mayorship by any means at all, right? But he's overperformed, Cuomo's underperformed, and as a result, he has a real shot. The general election is a mess, however,
has a real shot. The general election is a mess, however,
because like all these candidates basically have vehicles that can
run with various minor parties in the general election. Right,
(18:49):
so you'll have already the current incumbent mayor, Eric Adams
is running as an independent. You'll have Republican County. I
think Curtis Silba, who's like the former Guardian Angel die right,
and you could have both like Mam Donnie if he loses,
could run on the working Famili's ticket Quoma if you lose.
I think there's some special vehicle third party that could run.
(19:10):
So rarely usually just a matter of the Democratic primary
mattering we're going to have a competitive general election in
New York too, it looks like and then it's very weird.
Then you get into things where like then it's no
longer ranked choice voting, and you have Cuomo and Adams
have these kind of overlap right, and so you could
see like mom Donnie wim with a plurality, even though
(19:31):
he probably would lose with ranked choice voting. When you
bring the more conservative voters in play, then there's probably
in anybody but Buzo run coalition. So it's it's all
kind of a mess. We like messes in.
Speaker 1 (19:41):
New York, Maria, but this particular mess, it is a
particular mess. And you know something that I've seen that's
kind of interesting because normally, you know, you have people
like endorse candidates and say, you know, this is why
you vote for this one in New York because of
what happened last time, And because people don't quite get
ranked choice all the time, I've been seeing a lot
(20:01):
of people instead of saying, you know, vote for Zoron
or whatever it is, they say, don't rank Cuomo, right, Like,
that's the messaging has shifted to, like, just do not
put him as any of your top five choices, even
if you like would marginally like prefer him to someone else.
(20:22):
And so that's like game theory, game theory, game theorifying.
How do you do that game theorifying it to the
next level right now? Telling people like, don't only rank
this person, do not rank this person anywhere because we
don't want votes to trickle down there. And yeah, so
that's it's kind of an interesting way of figuring out, Okay,
(20:44):
how do we game this. I'm very interested to see
how it all plays out. Obviously I'm not voting in
it because I'm in about a resident, but because you know,
I have been a New York resident for many years
and still still spend a lot of time there. It
is something that is near and dear to my heart.
Speaker 2 (21:03):
And I'm not voting because I am a registered Republican. I think I've told this story on the show: I registered Republican in New York because I wanted to vote against Donald Trump in twenty sixteen, where my vote mattered more. It's very hard to change your registration, by the way. We do have some breaking news here; this podcast is not produced in real time. One of the mayoral candidates, Brad Lander, the city comptroller, was just arrested outside of an immigration court. "City Comptroller Brad Lander
(21:28):
of New York was arrested and accosted by masked federal agents at an immigration court in Lower Manhattan" is the photo caption I'm reading. There is a photo of him looking, I don't know, dramatic, kind of in, like, I guess it's not quite a chokehold, or is it? But he was arrested in a somewhat violent struggle. And this
(21:50):
is important because now Brad Lander is, A, going to generate headlines, and now he's a focal point. I mean, now he kind of got the quasi-New York Times endorsement, and, like, now Brad Lander is in the news, right? That's interesting. That could affect things. Yeah, I don't know if it was deliberate.
Speaker 1 (22:07):
If it's deliberate, it's incredibly smart, because you just mentioned a term that I actually think we should talk about a little bit when it comes to ranked-choice voting, which is focal points, right? When you have to rank five candidates, when you have all this cognitive load, and it's really difficult to figure out, okay, where do I actually stand, focal points matter, right? So it's actually incredibly important,
(22:29):
like, who can command the attention, who can be in your mind.
Speaker 2 (22:33):
Right?
Speaker 1 (22:33):
We have recency effects, we have those types of biases, so that if someone is on your mind for whatever reason and has, like, a slightly positive bent for you (you might not even remember that it was because of this arrest; I mean, obviously now you will, because it's so close to the election), it will make you much more likely to be like, oh, you know, I have a good feeling about this person, I'm going to rank them,
(22:55):
even if it actually has nothing to do with the election, right? Like, if you think about it, he didn't suddenly become a better or a worse candidate just because of this one thing. But in your mind, something shifts, and all of a sudden those votes shift as well. A really interesting psychological phenomenon, which is why, if this was actually on purpose, it's kind of a brilliant strategy. If it wasn't on purpose,
(23:17):
it still might actually work out. What's Polymarket saying, Nate?
Speaker 2 (23:22):
So he's up to two percent from, like, zero percent.
Speaker 1 (23:25):
That's a big, that's a big move in just a few minutes.
Speaker 2 (23:28):
And he's already had, again, this kind of New York Times thing, where they kind of weaseled out of making an actual endorsement, right, but they kind of implicitly endorsed him. And now, if it's Machiavellian politics, then good job, Brad Lander, you just gave yourself a shot.
Speaker 1 (23:49):
Yeah you did, indeed. Yeah, And this is the kind
of stuff that actually ends up mattering because we're human
and you know this is the way the brain works.
Speaker 2 (23:58):
No, like, just, you know, name recognition is useful, right, because you want to be listed somewhere, right? Like, there's tons of Zohran posters and stuff in my neighborhood, all over the city. It's like, oh, I know who this guy is, and, I like, I like Lander or whatever, right? But, like, I've heard of this guy, and if I'm able to list five people and I've heard of this guy, I'm not going to go for somebody I didn't know, right? So, like, this is interesting.
(24:19):
Maybe, I don't, yeah, I'm not quite sure if it's a three-way race now, but, like, it's, well, well...
Speaker 1 (24:24):
Yeah, we'll see what happens.
Speaker 2 (24:31):
And we'll be right back after this break.
Speaker 1 (24:41):
So, Nate, a few weeks ago you were at the
Manifest Conference and you had some interesting takeaways about AI
twenty twenty seven, which we've talked about on the show before,
and you know, before we get into all of that,
do you just want to let our listeners know what
exactly the Manifest Conference is because it's kind of a
big deal in the AI community, right.
Speaker 2 (25:03):
Absolutely. It was just a week ago. I mean, it feels like I've been in Vegas for, like, three weeks. It's just been eight days, I think. It's day eight.
Speaker 1 (25:12):
Time warps during the World Series of Poker.
Speaker 2 (25:16):
Yeah. The Manifest conference is held by a prediction market called Manifold, which is a market where you trade for free, for a currency called mana, kind of like credibility points, basically, but people take it very seriously. However, it's, like, kind of the de facto, and there are other conferences that I don't go to in the Bay Area, but it's, like, the de facto gathering of this overlap
(25:38):
of prediction-market nerds plus other adjacent people: rationalists, effective altruists. Some listeners will know what all these terms mean; if you read my book, there's a big detailed schema of what these different groups are, right, you know, various nerd lineages. There's a poker tournament there that I participated in some years. You know, it's kind of like effective
(26:00):
altruists meet degenerate gamblers, right? You know, it's sponsored by...
Speaker 1 (26:07):
Sorry, I'm just going to interject: it's not just effective altruists and degenerate gamblers. I think it was last year's Manifest conference, maybe the year before, where there was a market on the chances of an orgy happening, and then the market went to one hundred after the orgy happened. So, you know, people have fun there.
Speaker 2 (26:24):
There's lots of stuff. It's a weird environment, right? You're on this actually quite pretty little campus in Berkeley called Lighthaven, right? But, like, everything kind of takes place inside Lighthaven, and there's, like, a cuddle party at some point, and there's, like, a...
Speaker 1 (26:42):
There you go: cuddle party, listeners. Cuddle party. I have big air quotes in my...
Speaker 2 (26:49):
There's all this discussion about, like, will the world end because of AI, and is that good or bad, right? Otherwise, it's a pretty straitlaced crowd. My friends and I, the Substack crew (Substack is a sponsor, as is Polymarket, both of which I am an investor in, so I'm overdetermined to be at this conference), you know, we had to, like, sneak some wine in.
(27:12):
We're like, you've got to bring out more wine, right? People are socializing here, you've got to bring out some more wine. And it's, like, a mixed crowd, and, like, yeah, it's very interesting. It's weird kind of going from that environment to the poker world. Where, like, I think, I casually talked about it last week, but, like, the Bay Area feels distant to me, right?
but like the Bay Area feels distant to me, right,
(27:34):
I don't mean that in a pejorative way. I mean
like the fact that AI is so top of mind
for people, and that you have people who think that
like we are on the verge of an intelligence explosion
or singularity or artificial superintelligence. Right, these they're all like
slightly different terms, but that's a commonly held belief in
(27:55):
that community, Whereas you adventure outside of that community and
people would think you were fucking crazy to think that
we're gonna be building Dyson spheres or whatever in a
few years. Right now, I feel like I'm a little
bit of a hinge point between the kind of normy
crowd and the AI tech rationalist crowd. Like relative to
people at Manifest, I'm like, Okay, I think we're geting
(28:18):
a little bit ahead of ourselves, which is assuming that,
like we are on the verge of artificial super intelligence, right,
we can discuss that or not, right. And I'm also like,
I'm not so sure, Like what the fucker I mean,
you know, you're transforming all of society. Shouldn't society get
a say in that, even if you think it's good? Right?
Whereas relative to norm people think, okay, you're sleeping on
(28:39):
the fact that we are maybe on the verge of
profound changes even from artificial general intelligence disruptions to how
people live, how people work, certainly how people perceive the world.
And so like, I'm kind of a bridge observation there,
but like, yeah, so we talked about the AI twenty
twenty seven report on this podcast before when it came out,
(28:59):
which is a kind of forecast, a scenario, somewhere in between; they call it, like, a forecast or a prediction. It's, like, a deterministic prediction conditional on fairly fast AI development, and it kind of presents one scenario where AI decides human beings are a threat to its growth and sends out drones and kills every human, sometime, I think, in the
(29:22):
mid-to-late twenty thirties, and then one scenario where we agree to de-escalate with China and the world is profoundly transformed into an AI utopia that some people might also call a dystopia, but at least we survive as the AIs' concubines or whatever. We are, exactly, right,
(29:43):
which apparently they think is the happy ending. Right. It
was interesting that, like at that conference though, I went
to an update and they were kind of pushing back
the timelines. Right, They're like things a little bit slower
than we thought, right, you know, one big benchmark, so
ais that are agentic that for can example, search the
web to do tasks like that's still not implemented. I'm
(30:04):
sure it will be at some point, But maybe I
should back up, right, Like I am skeptical of claims.
So the different terms here, right. Right now, we have
what I would call spiky or patchy general intelligence for
like desk jobs, right. And you know, we're not
near having like an AI plumber. We do have AI
(30:26):
automated driving, but that required, and it's pretty good in
my opinion, though still only in a handful of cities,
a lot of very dedicated devotion to that particular task. Right,
That's some important technology, right. But like, so first of all,
we're mostly restricting ourselves to things in the realm of
symbol manipulation, computer, remote desk job type of stuff, right,
(30:47):
and that's already kind of patchy. There are many things
that AI is already much better than humans at. But
like it can't really search the web, It
can't really book a flight or make a restaurant reservation
for you, can't play poker, can't play chess. Right, The
image generation capabilities are pretty good, but you know, show
some AI artifacts, right. But yeah, you know, I
(31:08):
think, too, you know, the bet is that eventually
AIs will learn to program themselves, right, and you'll have
recursive improvement toward not just artificial general intelligence, but
then artificial superintelligence, right where it's discovering things that like
human beings have not discovered and and you know, become
(31:28):
hyper persuasive. And I am skeptical of this for various reasons, right,
One, which is I just think it's kind of
underdetermined, right. Like, their argument is like, hey, look,
so far we've been on exactly a
logarithmic curve, right, but as you add more compute, you have
these scaling laws, quote unquote, right, where, as you have
more compute, things get more intelligent. And it's been a
(31:53):
fairly predictable trajectory so far, and so far the progress
is remarkable. Right, if you had told somebody what ChatGPT
or Claude or Google Gemini can do, if you
told them that ten years ago, they'd be fucking amazed
that it can do many of these things quite well,
and some things excellently with natural language, and like they're
very very impressive, right. But like, but kind of when
it comes down to it, the argument is basically just,
and I know I don't sound rigorous, I've spent
(32:14):
a lot of time thinking about this, they're just like, well,
line is going up, line will keep going up, and
if line keeps going up, then that inevitably means that
we pass superintelligence, right. And I don't think it's
inevitable the line keeps going up, because I've seen
a lot of lines that keep going up until they don't.
That's one thing, right. Also, it's trained on human
data and has human reinforcement, very importantly, and without human reinforcement,
(32:35):
these things get pretty wacky, right. Basically, I'm very
underwhelmed by the case for like how quickly this would
transform the physical world and how quickly it achieves
beyond-human capabilities. And I feel like a heathen in
the AI community to be skeptical about this. But this
(32:56):
is a very long-winded anecdote. And the other reason too, I'm
sorry I'm babbling on, I do that sometimes, right. Honestly,
like, they're not like really describing political constraints. Like, you know,
Sam Altman wrote an essay, which I'm probably going to write about
in the newsletter soon, where it's like, we're gonna have
a singularity, meaning we have this explosion of new technology
and intelligence, right, but it'll all feel pretty normal and
(33:19):
we'll cope, right. I'm like, no, Sam, no, it's not
gonna happen. Right, We're not gonna have a fucking singularity
where we have as much technological progress in three years
as we had in the entire hundreds of thousands of
years of human and proto-human species, right, where he's like, oh, yeah,
things look different. No one has a job anymore, right,
And you know, yeah, no, it's not gonna be just
(33:40):
a gentle singularity, that doesn't make any fucking sense. And
the reason why it's useful to go to conferences like
Manifest is that you realize that, like, people are out
on a bit of an island, the Bay Area is
far away from the rest of the country, right, it's
a little insular. And that affects my
views a bit, because, like, they're not, I think, sufficiently
(34:00):
accounting for like societal political constraints, backlash and so forth.
So I had a great time at the conference, got
to connect with people who, of course, I know from
online but have never met in person. But yeah,
that was my experience. I went on for like ten minutes,
Maria, so it's your turn to tell me what I'm
full of shit about.
Speaker 1 (34:16):
No, well, well I'm not sure since I was obviously
not at the conference, but I'm actually quite curious. So
the you know, you said that the authors of the
AI twenty twenty seven report were there and that they
have kind of modified their timeline a little bit. Did
you get a chance to kind of get at whether
(34:39):
their thinking has fundamentally changed as well, or whether they
now just think that, okay, it might be a little
bit slower.
Speaker 2 (34:46):
Well, so for one thing, it's a little bit hard
to know, like are they making a prediction that they
would like bet on or are they trying to like
draw attention to what they view as a plausible scenario.
And there probably is some incentive, if you're trying to draw
attention, to like produce something more dramatic. I mean,
(35:10):
it's a story. They had Scott Alexander, who is one
of my favorite writers. Yeah, he formerly wrote Slate Star
Codex, now Astral Codex Ten, to like just punch up
the writing, and it's a beautifully written, interesting document, right,
But like, but there's a little bit of like we
want to get attention. They let me have a chat
with them before and gave me a preview of the report, right,
and like so, like, I think you can probably like
(35:33):
discount a little bit based on the incentive to like
paint a more robust picture and not just hedge a bit.
And like, I just think that, like, I have become
increasingly unpersuaded as to why people think we can make this leap
from AGI, which I think we will have, the thing people
(35:53):
call artificial general intelligence, that basically can do most desk
jobs, some people say all desk jobs, right. Let's
say it can do the large majority of desk jobs
as well as or better than the large majority of human beings. Right,
I think that's a pretty safe assumption, hedging now by
saying the large majority as opposed to all, and then
(36:16):
we're not that far from that. Well, we can debate that, right,
the leap to the physical world and to super intelligence.
I just I've read all these reports, I've talked to
lots of people. I just think they take that too
much for granted, and I think they need to like
treat it as a possibility. But like, I think it's
really underdetermined as far as like a persuasive explanation of that,
(36:37):
for me. And conditional upon that, then I think these reports
like underestimate like the societal piece, society resists and pushes back,
I mean, like, yeah.
Speaker 1 (36:48):
Well not just resisting and pushing back. I mean I
think this kind of like almost dismissive attitude, like what
you were saying with Sam Altman, like yeah, it's going
to happen and we'll cope. Like no, this is going
to cause if it does happen, even to some extent,
there's going to be massive disruption, right, and it's not
(37:09):
going to be peaceful. People do not like their jobs
being taken away. And you know, it's interesting because we
talked on the show before where there were you know,
union workers striking at ports where some of their jobs
were getting automated, and that was basically less work
(37:31):
for them, and so technically it was actually kind of better,
but practically speaking, they were like no way, right, Like
you're taking away jobs, you're taking away hours, Like we
don't want this kind of marginal efficiency from these robotic things.
We want, you know, we want our workers getting money.
And they were effective, right, like they stopped working. The
(37:52):
strikes actually worked, and that's kind of one one very
small piece of it and in a place where, like
the disruption, it's not like they were all going to
be out of jobs. It was just kind of it
was relatively small, probably didn't feel relatively small to them,
but it was a relatively small piece of this larger mosaic.
But if you get those kinds of protests, even with
(38:13):
something like that, imagine the pushback that you're going to
get in different areas if that starts becoming...
Speaker 2 (38:21):
So, we just came off a weekend in which we
had mass political protests, the No Kings protests. Right, you
have, maybe most importantly of all, Israel bombing Iran because
of technology. Right. I don't want to take a very
serious situation and make too cute an analogy, but like,
here you have technology, like Iran's nuclear program, that
(38:43):
Israel feels could be threatening to its interests or to
global interests, right, and they're bombing the hell out of
Iran right now. Right. So to think that like you're
going to have these profound transformations and the world kind
of stands idly by, right. And the fact that people
kind of in what I call the Village, right, the
DC types, are often clueless and dismissive about AI, it's
actually a reason to think that the political reaction is
(39:04):
not priced in, right. It's kind of counterintuitive, but
eventually it will be priced in, if and when people's jobs
are threatened, if and when it profoundly rebalances power, if
and when it potentially makes the US and China be
huge hegemons relative to the rest of the world, the
(39:25):
two superpowers, while Europe and the Middle
East and everyone else is behind, right. Like, what's that
going to mean? And so, like, yeah, they're not wrestling with that. And
there are two ways to read that. One way to read
it is that, like, okay, that means they're not pricing
in the constraints of society, the legal system, resource constraints. It
(39:46):
just kind of seems incongruous to have this world where
the Middle East is at war. Who knows when China
and Taiwan go to war. Ukraine's still at war, right, democracy,
I don't want to be overly dramatic, right, but like,
but you know, Trump presents certain threats for the US. Yeah,
and it's like, you know, in this global world, it's like, kind
of, what are we, is that going to be a...
Speaker 1 (40:07):
Smooth transition? I mean, no, none of this is smooth.
And I think that you also cannot underestimate humans' survival instinct, right,
and kind of their desire not just to survive,
but, like, they don't like having things taken away
from them. And you know, at some point, like, this
is, you know, if we look at the
(40:29):
kind of big theories of the history of the world, like none
of those transitions were ever smooth, right, and
people do start rising up and rebelling, and we're seeing
unrest all over the world and as you say, Nate,
you know, climate change, all of these things. We are
not living in a nice, peaceful, like sunny moment in
(40:50):
time where like everything is just going well and people
have become complacent. We were, but now I think that
that complacency is fading, and a lot of that angst
and frustration is boiling up in different ways to the surface.
So I think that this is Yeah, I think that
this is going to be profoundly disruptive. And so we'll
(41:10):
just see how that plays out, right and at what point,
Because I think you made a good point at the
beginning of our conversation about this that San Francisco is
a little bit of an island, and I'm putting San
Francisco here in quotes because letting it stand in for
kind of that community. And it's an echo chamber, right,
because you have these people who are all together, all
(41:32):
in the same place, going to the same events, talking
to each other, reinforcing each other's ideas. And when that happens,
like, that's never good, right. Like the best things,
best ideas, best businesses, best transitions happen where you have
more outsiders and people who are open to kind of
different views, different backgrounds, you know, just different types of
(41:58):
people coming together. And that's not what's happening in this
little world. It is a little bit of a bubble. And
so I think that that's just really important to keep
in mind in terms of what we're seeing now and
also what we're likely to see unless something changes, because,
you know, unless that bubble pops, like, their thinking will
still be reinforcing itself for a while.
Speaker 2 (42:18):
Yeah. Look, it's frustrating, because I don't know as much
about AI as some of these people do, right, and their
kind of common accusation is, well, we've spent more time thinking
about this. Well, I know more about politics than they
do, than any of them do, right. And so, like,
I'm trying to serve as a
bridge between these two kinds of communities, so to speak.
Speaker 1 (42:33):
Yeah, well, let's see how it all plays out, Nate.
I'm very curious, you know, once you get invited
to the Manifest conference next year,
what happens and how that thinking might shift.
Let's definitely, you know, I think this is a topic
that we come back to often for a good reason,
(42:54):
and I think it's something that we have to keep
revisiting to see what's changed or hasn't changed, and how
we feel about that. On that note, let's leave the
world of AI behind and enter our other bubble, Nate,
of poker, and best of luck to the two of
us as we hit the tables this week. Let's let's
(43:15):
try to reclaim some of our poker glory.
Speaker 2 (43:20):
Yeah, if there's no poker on the show next week, that
means we sucked.
Speaker 1 (43:26):
Yes, if we don't talk about it, you know how
it's going. Good luck to the two of us night,
see you at the tables. See let us know what
you think of the show. Reach out to us at
Risky Business at pushkin dot fm. And by the way,
(43:47):
if you're a Pushkin Plus subscriber, we have some bonus
content for you that's coming up right after the credits.
Speaker 2 (43:52):
And if you're not subscribing yet, consider signing up for
just $6.99 a month. For that nice price,
you get access to all that premium content and ad-free
listening across Pushkin's entire network of shows.
Speaker 1 (44:04):
Risky Business is hosted by me Maria Konnikova and by
me Nate Silver.
Speaker 2 (44:09):
The show is a co production of Pushkin Industries and iHeartMedia.
This episode was produced by Isabelle Carter. Our associate producer
is Sonia Gerwitz, Sally Helm is our editor, and our
executive producer is Jacob Goldstein. Mixing by Sarah Bruger.
Speaker 1 (44:23):
Thanks so much for tuning in.