Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. Welcome back to Risky Business, our show about making
better decisions. I'm Maria Konnikova.
Speaker 2 (00:30):
And I'm Nate Silver.
Speaker 1 (00:32):
Today on the show, we're going to be getting into
it on AI. There has been a lot of news
on the AI front in the last few weeks, some
of it coming from the Trump administration and some coming
from overseas, with China and DeepSeek.
Speaker 2 (00:48):
Maria thinks AI is overrated. I think AI is properly rated,
as you'll see.
Speaker 1 (00:54):
And then we're going to get into a listener question
about poker and how to beat your local cash game.
Speaker 2 (01:15):
Let's start with our friend artificial intelligence.
Speaker 1 (01:20):
Yeah, so I think we're talking about a few different things, right?
So we have the Trump initiative on AI. So we
have a few things that he did, including things that
he took away, right, the executive order that took away
certain restrictions on AI that had been put in place
by Biden. But then also we have, you know, this
(01:43):
big funding initiative into AI by the US government, Stargate.
And then we also have AI coming out of China,
DeepSeek, which has freaked everyone the fuck out. I
think that's the scientific way of putting it, and has
made markets crash, Nvidia stock, a lot of other stocks, the Nasdaq.
(02:03):
You know, people have not been happy to see the
success of DeepSeek. So there's a lot to talk
about today.
Speaker 2 (02:12):
Where do you want to start, Maria?
Speaker 1 (02:15):
Well, what do we want to start with? I think
it makes sense to start with DeepSeek, because actually,
you know, I think that the two are very related,
right? Because we have what the US government is and
is not doing, and one of the reasons that right
now things are freaking out is because of DeepSeek.
So I think we can start with that, see what
(02:35):
the implications are, what the responses can be, and then
kind of see what the US has done so far,
to see if it's in line with what we think
the correct strategy should be. Yeah.
Speaker 2 (02:44):
Look, until about a week and a half ago, when
people started to notice DeepSeek, their model R1
in particular, the conventional wisdom was that, like, America is
way ahead in the AI race. I should say, with
respect to large language models, machine learning transformers, right? You know,
driverless cars is a different enterprise, and drones and things
(03:07):
like that. But in terms of LLMs, large language models,
ChatGPT-like things, then, you know, the US was probably one, two, three,
four in the rankings. And so, yeah, this has
interesting geopolitical implications. Yeah.
Speaker 1 (03:23):
And one of the reasons, just to step back,
that the US was assumed to be ahead was because
the United States had made it more difficult for foreign
governments, China especially, to acquire the chips
that are necessary to build these large AI models, the resources,
et cetera, et cetera. And so one of the disconcerting
(03:46):
things to the United States was that when DeepSeek announced
its results, it also announced that it basically was able
to do this at one tenth of the cost and
resources that other models had used. So this was like
an oh shit, you know, even if we restrict access
to all of these other things, they're still able to
do this. Now, I will say, and other people have
(04:07):
pointed this out, I don't actually know how much we
can trust the numbers and figures, right? This is just
a claim. We don't know what the training materials were,
we don't know what the development costs actually were. We
just know what they claim that they were. So I
think that this is something that we should put an
asterisk next to, because it is important to realize, right,
that if you can't actually trace the
(04:30):
information and verify it, and you have to take it
on faith, that's never a good way to take information. Right?
That's one of the things we say over and over
on Risky Business: when you make decisions, try not to
take things on faith. Try to verify, right? That's
much better. Yeah.
Speaker 2 (04:47):
So this is basically built by a Chinese hedge fund, right,
which is well capitalized. And you know,
there are a lot of smart computer engineers in China,
and I think they were all working on this project.
So kind of the last step of training, I mean,
it's a little bit like, you know Rosie... who?
(05:08):
Rosie Ruiz?
Speaker 3 (05:09):
I do know who Rosie Ruiz is.
Speaker 2 (05:11):
Yeah, if you've covered, like, fraudsters, right? And I don't
mean to say it's a fraud, but it's like... so
she ran, what was it, the New York Marathon, right?
Like, it was a fraud.
Speaker 1 (05:19):
By the way, so Rosie Ruiz actually figures in my
next book on cheating. So Rosie Ruiz "won," quote unquote,
I'm putting this in quotes, the Boston Marathon women's title
with this incredible time. The story was amazing, wonderful. You know, people
loved it, and then it turned out that she took
(05:40):
the subway for a huge portion of the race. And
but she almost got away with it, which is the
really fucked up thing. The reason she got caught was
because the subway car that she happened to get on
had a reporter, a photographer, who had been
covering the marathon and was like, wait, what's going on?
(06:01):
So it actually took them multiple days to figure out
that Rosie Ruiz did not actually win, did not actually
run the time that she ran. By the way, this
was back in nineteen eighty, so, right, the technology was different.
People were not getting tracked as closely as they are
right now.
Speaker 3 (06:18):
But spoiler alert for.
Speaker 1 (06:20):
You know, a big fun thing that I get into
in my book: it's possible even today to pull a
Rosie Ruiz.
Speaker 3 (06:28):
So that's for later. But Nate, why are we talking
about Rosie?
Speaker 2 (06:32):
Because it's a little bit like... and I don't say
there's any actual fraud, though I really don't trust anything coming
out of mainland China. Yeah, but, like, but yeah, so
it's a little bit like if you jump in around
the twenty-fourth mile and then run two really good
closing miles. It's still not the same, and it's, like,
not an accurate representation to cite just the
(06:52):
cost of this training run when you have all these
resources behind it, and it's kind of like the last step.
But clearly it's more efficient in terms of, you know,
when you run a request, put in a query, how
many computer cycles is it burning through? I'm using very
highly technical language here, right. You can actually host DeepSeek...
(07:14):
I keep going to call it DeepStack. There are a
lot of deep stack poker terms. You can host DeepSeek
on a desktop, right? If you want to
actually have it say things about Tiananmen Square, for example,
then you need to run your own native instance, because the
official Chinese-hosted web version doesn't like to, you know,
doesn't like to talk about certain things.
Speaker 1 (07:33):
You don't say. You don't say.
Speaker 2 (07:38):
Then there are a lot of debates, though, about what
it means if it turns out that... I mean, so
one category of debate is, like, does the US
lose its lead versus China? That's one kind of whole bucket, right.
Another bucket is, like, what does it mean if you can now,
like, run an AI lab on a desktop, and they're
only going to get faster? How do we regulate this?
And then there's a whole bunch of economic stuff about,
(07:58):
like, you know, what does this mean for the price of
different assets?
Speaker 1 (08:04):
Right?
Speaker 2 (08:04):
So Nvidia, for example, is the largest manufacturer of semiconductors,
is a Taiwanese company, you know. So if compute is cheaper,
is that good for them or bad for them? It's
not necessarily straightforward.
Speaker 1 (08:19):
Right, yeah, it's absolutely not straightforward. And also, you know,
one of the other potential things that
happened with DeepSeek, we don't know, is a process
of training known as distillation. So that's a slightly more
technical term, but what it actually means is that you
are training your model on the outputs of other models, right,
(08:40):
so you can actually pass the benchmarks and be able
to train up more quickly, because you are using...
Speaker 2 (08:45):
Okay, so maybe now we are having more of a Rosie Ruiz situation.
Speaker 1 (08:49):
And if that happens, then it is more of
a Rosie Ruiz situation. And just as a flag:
that's not legal, right? Technically, you're not supposed
to do that. But if you do do it and
you're able to then cover...
Speaker 2 (09:01):
These outputs, the respect for intellectual property varies by countries.
I don't want to be I don't want to bad
mouth the culture for not respecting electric property rights. But yeah,
if you're if you're you know, if you can just
copy off chat, GPT or in for what its weights are,
then that you know, I mean, that's that's a that's
an issue.
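[Editor's note: for listeners unfamiliar with the distillation Maria describes, here is a minimal sketch of the idea: a smaller "student" model is trained to match the output distribution of a larger "teacher" model, rather than learning from raw data alone. This is a generic illustration under standard assumptions (toy models, random inputs), not DeepSeek's actual training code.]

```python
# Knowledge distillation in miniature: the student learns to imitate
# the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

vocab_size = 100
teacher = torch.nn.Linear(32, vocab_size)   # stand-in for a large trained model
student = torch.nn.Linear(32, vocab_size)   # smaller model being trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(100):
    x = torch.randn(16, 32)                 # a batch of toy inputs
    with torch.no_grad():
        teacher_logits = teacher(x)         # teacher outputs are the only signal
    loss = distillation_loss(student(x), teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The benchmark point in the dialogue follows from this: if the teacher already scores well, a student trained on its outputs can approach that score at a fraction of the original training cost.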
Speaker 1 (09:20):
Yeah, it absolutely is. But let's flip that around
a little bit to talk about some of the
potential positives of this, because we're obviously
seeing, you know, potential, uh, red flags and negatives.
Speaker 3 (09:36):
But what about I mean.
Speaker 1 (09:38):
I actually think that it's not a bad thing that
DeepSeek is open source, right? So people can try
to figure out, okay, what is it actually doing, how
is it actually doing...
Speaker 3 (09:48):
It? Isn't that? Isn't that good?
Speaker 1 (09:51):
And I know that one of
the things that, you know, people don't often like
is open source. But, like, open source can go
either way, right? Open source bad if it's us and
China's getting our secrets, but, you know, open source good
in other respects. And I think open source is one
of the few ways that we can actually peek inside
(10:11):
the black box. And I would be much more concerned
if it wasn't even open source, right? If we had
no idea how it was trained, if we didn't know
about the funding, if we didn't know any of this
and it wasn't open source, right, if all of
those things were concealed, I'd be more worried. Since it
is open source, I actually think that could potentially be
a good thing in terms of knowledge sharing and trying
(10:32):
to figure out how do we make these processes more efficient.
And Meta has actually created a task force that is studying
the DeepSeek processes and figuring out, okay, how can
we use these, and do we want to change the
way that we're developing some of our LLMs, some of
our chat models, to mimic their processes so
that we become more efficient? And if this process actually
(10:55):
means that we use fewer resources, that's obviously a net
positive for the environment. However, if it means proliferation of
AIs everywhere because it's cheaper, then it could actually be a
net negative. Which is all to say that this is
really complicated. This is not black and white, and when
you're making these sorts of decisions, you have to assign
weights to all of these different outcomes, all of these
(11:16):
different probabilities. And frankly, I don't have the... I don't
think anyone has the expertise to do that, because it's
such a new world. I certainly don't have the expertise,
but I don't think anyone can see the future clearly
enough to figure out, you know, how do we weight this?
Because there's a lot of uncertainty around this.
Speaker 2 (11:32):
So the hardcore AI-concerned people, the doomers, tend to
be anti open source. OpenAI was founded as open AI,
but no longer kind of abides by that mission. And
I think their view... let me try
to give you the summary slash kind of steelman version
(11:53):
of it, right? They think, okay, this is the period where AI
is being birthed, where artificial general intelligence, ranging up to
artificial superintelligence... the first means it can do most things
at a human level, or human level plus, right,
and superintelligence means it achieves breakthroughs that no human
can, across a broad range of fields, basically, right? They
(12:17):
think this process of giving birth to these models is
going to be very dangerous, and therefore you want to
have it in the hands of as few people as
possible, and as trusted people as possible. I think at one
point Sam Altman was trusted by this community. You know,
Anthropic is a bunch of people who left OpenAI
because they thought OpenAI was moving too fast.
(12:38):
Right. And then Google Gemini... you know, Google
again had been fairly conservative about how it moved on AI.
It's a huge enterprise; it doesn't
want to risk that and other things, and kind of
a different culture, I would say, at Google. Then Facebook,
or Meta: the reason why their models were open source
(13:00):
is because, like, they weren't competitive. This is what people
would say in the AI expert community, right? It's because they
weren't as good as, you know, clearly Anthropic,
which has Claude, or OpenAI, clearly the two
leading ones. And then, you know, Google Gemini
had some problem with, like, drawing, like, woke Nazis and
stuff like that, and, you know, I think it's
(13:23):
not as good as the other two by a fair bit.
But that was considered the
pecking order, right, that Microsoft was... excuse me, that Meta,
too many M's, was fourth. And therefore they open sourced
it to kind of say, okay, well... in some sense it's, uh, not quite sabotage.
You're like, well, fuck you, you're not going to get
(13:43):
these same returns, and we'll have applications for people
who don't want to pay for it, for free or whatever, right?
So the fact that it's open source is interesting. I mean,
you know, China clearly is willing to do things, uh,
to undermine the Americans, even if... So, I mean, the
(14:05):
other big one in the news is of course TikTok,
which is owned by ByteDance, right? The US Congress
passes a law, upheld by the courts, that says you
have to sell to an American company, or at least
to a country not listed on... there's, like,
literally an enemies list of countries, and China is
on it, right? And ByteDance says, we have this
(14:27):
really valuable enterprise, but we'll just turn it off, right?
If you make us sell it, then we'll just turn it off. Well, clearly,
I mean, obviously it would be a forced sale and
you wouldn't have quite the market-clearing price, but, like,
you know, the value is a lot more than zero,
and they're willing to... So, you know, there's a little
bit of suspicion that, like, this is just meant to
undermine America's lead in AI and not make a
(14:47):
whole lot of profit for China. That's kind of, like,
addition by subtraction, right? You know, and people say, hey,
these people are idealistic, it's not the same capitalist system there.
And even in the US, sometimes the founders aren't that greedy;
they just want to make a really cool product. So
there might be some of that too. But it's in
line with previous Chinese strategy, I suppose.
Speaker 1 (15:08):
Yeah, it's actually interesting that you mentioned TikTok,
because this is another kind of strategic element of this:
that, you know, we're... we being the US... has moved
to ban TikTok, right?
Speaker 2 (15:23):
Sorry, what? USA!
Speaker 3 (15:27):
You are rooting for the US? Nice, nice. All right.
Speaker 1 (15:33):
The US, uh, was moving to ban TikTok.
We have no idea what's going to happen with that now.
But in some ways, you know, on DeepSeek it's
just, like, la la la la la, right? There's
no movement in that direction, and instead Trump is talking
about tariffs and other things, and trying to kind
of get at it that way. And I think that
(15:55):
it's a very interesting dichotomy where, like, if you're worried
about TikTok, like, shouldn't you be worried about the strategic
and security risks of, you know, of a Chinese company
that all of these AI models
are run through, right? Like, that's Chinese owned, Chinese developed.
If you're going to be consistent, like, that
(16:15):
seems to be a much greater risk than TikTok,
to be perfectly honest.
Speaker 2 (16:21):
And we'll be right back after this break.
Speaker 1 (16:33):
So what can we take from
this, and from what's happened in the last week? How
does that mesh with the types of endeavors that
Trump and team have already put forward? Now, one of
the things, so I started off by saying that they've
rescinded the executive order. The executive order that Biden had
(16:57):
on AI did have to do with open source, right,
and it was actually very
skeptical of open source as well, because it wanted, you know,
full reporting, knowing everything that was going on, but let's
keep that information from foreign governments. Now that's out, right?
So that's been rescinded. And that's one of
(17:19):
the main issues that we're seeing with DeepSeek. So
what do we think about that? What do we
think about the government approach? Is it misguided or is
it on the right track? And, I think, all of us,
no matter if you're yay-AI or nay-AI,
I think everyone wants to minimize p(doom). Like, I
would hope that no one wants the world to be destroyed.
Speaker 2 (17:40):
So I don't think anybody thought that, like, this Biden
executive order was going to stop p(doom) by itself.
The California law that was vetoed by Gavin Newsom probably
was considered a bigger deal. It was a state law,
but they're all based there; if they want to operate in California,
then they'd be subject to it, right? Like, that might
have been a bigger deal. But, like, the general pattern
here, with the California law failing, with this thing being rescinded,
(18:05):
with Sam Altman being less and less, shall we say, concerned
about safety, right: we're going to get this AI race,
and this idea that you could stop it by having
only, you know, three operators until you reach some point
where safety was achieved is not going to happen, clearly, right?
(18:28):
You know, like, if you look at any other technology, you
would say... because one concern I have about AI, I wrote
about this in the newsletter this week: you know, when
you have these very big, powerful companies that have this
lead in computing power and engineering talent, right, usually that's
not how it works, right? You have the next big thing,
and the next big thing is created by new companies,
because the old companies are stodgy and bloated, and it's
(18:50):
not their mission in the first place, and they're not cool,
so they don't attract young talent, right? So to some extent,
the fact that, like, you know, it can be disrupted
might lessen the worry about hegemonic concerns over AI, and
the kind of paternalistic slash people-get-very-rich-off-it
concerns about AI. But, yeah, I mean, look,
I think we're a long way from achieving artificial superintelligence,
(19:15):
which is the superhuman capabilities, right? But there are
ways that AIs can be dangerous far short of this.
Speaker 1 (19:21):
Right.
Speaker 2 (19:21):
Teaching you how to, like, mix chemical compounds or
build a pipe bomb or things like that, right? Or it
can aid and abet, like, suicidal thoughts.
And, like, those use cases are going to be...
Speaker 3 (19:33):
Like they're already happening.
Speaker 1 (19:35):
They're already happening. We know that the guy who blew
up the Cybertruck in front of the Trump hotel
used ChatGPT to figure out how to do it,
which was actually probably one of the reasons it wasn't
more destructive, because the instructions were not very good. So
I think we should be
grateful for the limitations of AI models for now. But yeah, no,
(19:58):
to me, that's actually one of the more interesting
points: we're worried about p(doom),
about, you know, superintelligence, all these things, but I
think that in the shorter term, and potentially in the
longer term, we need to be more worried about stupidity,
right? About the fact that, like, they aren't
(20:20):
superintelligent, right? And people can misuse them, and
they can give flawed outputs. And as they become more
and more dominant, mainstream, used in searches, you know, used
in day-to-day stuff, but also used
higher up, for people who, you know, want something to
(20:41):
summarize research for them, et cetera, et cetera, the
problem is going to be kind of much more mundane,
right? It gives you bad information,
it gives you bad instructions, it elides over something, it
doesn't quite synthesize something correctly. I think that
is actually the more pressing problem. It's not
(21:04):
p(doom), boom, we're going to blow up, but
kind of a smaller p(doom)... it's already lowercase,
but, like, a subscript p(doom), on a day-to-day basis,
depending on who uses...
Speaker 2 (21:17):
It, and for what. I've kind of flipped on this
a little bit, where I think the hallucinations are
an overrated problem. I think what the models are doing
is very impressive. They have fewer hallucinations than before.
And, you know, I use them a lot, just for,
like, research and problem solving, a little bit of programming
(21:38):
and things like that. And I think, you know, the
one I've used the most is o1, which is
the latest public build of ChatGPT, or OpenAI. It
just can do higher-level shit pretty well. You know,
it also can catch itself in midstream. They put some
routine in where, like... well, typically, a large language model,
(21:59):
the reason why it seems like it's printing one word
at a time is because, like, that's actually kind of
how it works, right? It literally kind
of goes sequentially: it compresses its whole text string,
puts it through a transformer, right, but then, you know,
it kind of recurses on how it puts it out.
It doesn't think of it all at once, right? But
now they've trained it to actually go back and check
(22:20):
its output for hallucinations, and it catches them
a lot of the time, but not all of
the time.
Speaker 3 (22:30):
Right.
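[Editor's note: a minimal sketch of the one-word-at-a-time generation loop Nate is describing. The toy next_token_logits function is a stand-in for a real transformer forward pass; nothing here is OpenAI's actual implementation.]

```python
# Autoregressive decoding in miniature: each new token is chosen by
# re-running the entire context through the model, one step at a time.
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context):
    """Stand-in for a transformer forward pass over the whole context."""
    random.seed(len(context))               # deterministic toy scores
    return [random.random() for _ in VOCAB]

def generate(prompt, max_new_tokens=5):
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(tokens)  # the whole string goes in, each step
        tokens.append(VOCAB[logits.index(max(logits))])  # greedy pick
    return tokens

print(generate(["the", "cat"]))
```

The self-checking Nate mentions would sit on top of this loop: rereading the generated tokens before finalizing them, rather than thinking of the whole answer at once.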
Speaker 1 (22:30):
So I actually just did some test runs prior to
taping this, so that I could see what was going
on right now. And when it's in my area of expertise,
I catch a lot of inaccuracies, not just hallucinations, but
things that are almost right but kind of misunderstand the point, right,
which could actually be even more problematic.
(22:51):
So I did some test runs in psychology,
where, first of all...
Speaker 3 (22:55):
It did hallucinate.
Speaker 1 (22:55):
It still is making up studies, making up data that
does not actually exist. When I try to... I'm like, oh,
this is really cool, I want to look it up. No,
it doesn't exist. But it also, like, actually
misinterprets findings and doesn't always get it right. This
is my expertise, right? I have a PhD in it,
so I can figure this out and be like, you
know what, this is...
Speaker 3 (23:16):
It's pretty good, but not.
Speaker 1 (23:18):
Actually, like, you don't want to rely on this, and
this is just outright wrong. But because overall it
seems pretty good, I don't catch it when it's not
my area of expertise, and I just assume that it's
pretty good. And that "pretty" is doing a lot of
heavy lifting.
Speaker 2 (23:33):
What about if you read a New York Times or
Washington Post article on poker or sports betting, right, or
some field, but...
Speaker 1 (23:42):
It doesn't actually hallucinate. It's
not going to tell me about findings that don't exist
to prove its point.
Speaker 2 (23:48):
It can tell a lot of white lies though, and
misrepresent and be fundamentally dishonest.
Speaker 1 (23:53):
Right. Well, sure, you have a problem of
reporting bias always. But there's a problem when you think
something is factual information and it's not, right? When I'm
reading an op-ed, I know it's an op-ed;
when I'm reading an article, I know it's an article. But when I
want this for facts...
Speaker 2 (24:09):
But that's not the use case, though.
You can't put everything in the microwave oven.
Speaker 1 (24:13):
Of course. Well, one of the things is,
if I want to learn about a field. So let
me give you another example; I didn't just use psychology.
I wanted to know about a specific
type of company. I'm not going to give you the exact
search, but it was companies making a specific type of
crypto investment. So I asked, like, what's a list of
(24:33):
companies that's done it? And it gave me a list. I
was like, great, now will you tell me what the
specific investments are? And it's like, oh, sorry, we don't
actually know. Like, we don't know that these companies have
made any investments. We just gave you a list. And
I was like, okay.
Speaker 2 (24:47):
You're using it wrong. There are a lot of
situations in life where precision isn't that important, right? You know,
I mean, for example, as you may know, listeners,
I was in Korea and Japan recently, and, like, something
I've been lazy about: I never taught myself how to
(25:09):
distinguish Japanese, Chinese, and Korean characters. And so I, like,
told it, on the flight to Tokyo, I'm like, hey,
give me a little very quick summary and give me
a pop quiz, and, like, you learn it in, like,
ten or fifteen minutes, right? And, like, you know,
I don't have to distinguish those characters with one hundred percent accuracy,
but it's, like, basically pretty good, and, like, I can
(25:30):
tailor exactly how that resource is geared toward me.
Speaker 1 (25:36):
I'm not saying this is useless. I'm just saying that,
right now, it is being used for the cases
that I've told you about, because it's at the top
of your search. When you do a Google search, like,
that's the first thing that comes up as the...
Speaker 2 (25:49):
I think it's a very bad branding decision by Google, right.
I do think it's bad, because, like... and Google's,
I'm sorry, it's not as good as OpenAI's or Anthropic's.
It's not. Sorry, Google. And, like, it undermines
Google's kind of lead in search. And I think Google
has handled this, a lot of things in this space,
very badly. Although, I mean, you know, it was Google
(26:10):
engineers who came up with the transformer paper, and
they, like, you know, still hire lots of
great people, but they've kind of become, like, this
feeder system to, like, the hipper or cooler AI companies,
I think. But, like, anyway, I
think people are way too hipster about this. Like, this
is the most quickly adopted technology in the history of
(26:33):
the world by some measures.
Speaker 1 (26:34):
Absolutely, and I'm not... like, I actually think
that it has a lot of potential. I want it
to do better, right? Like, I want them
to fix this shit.
Speaker 3 (26:44):
Right.
Speaker 2 (26:45):
Look, we are poker players, Maria. We should be used
to accepting information as shifting your prior
or shifting your view, but, like, not being definitive, right?
And as journalists, we're both journalists too, like, you know,
when a source tells you something, you vet...
Speaker 1 (26:59):
It, absolutely. But it actually adds more work for me,
because I have to go through it and try to
figure out, what can I rely on, what can't I
rely on? Now, one listener, who has shared his
experience on, uh, on social media as well, and I
know you kind of referenced it in your
newsletter this week, Kevin Roose, had emailed me about poker
(27:22):
training and said that he was using ChatGPT and
AIs to help him with poker. And I said, don't
do that, because it's going to tell you the wrong thing.
And he still did it, and he said, oh,
I won a tournament, it was good, it
was helpful. So I actually had ChatGPT do some
poker training for me.
Speaker 3 (27:42):
It's not good.
Speaker 1 (27:44):
It gives you incorrect advice. If you don't know, if
you're a novice and you're using this, you might
get lucky, right, and it all works out. But
let's go to poker: like, try to use ChatGPT
to teach you poker strategy. It is not going to
teach you strategy. Did you, did you try it? And the better
you are...
Speaker 3 (28:03):
Yeah?
Speaker 2 (28:04):
Yeah, I just... I did the same, and I thought
it was pretty good. It misses things, like...
so I actually did this last night. I mean,
it gets, you know... so what did you, what
did you actually ask?
Speaker 3 (28:15):
But here's the thing, though. You're, you're able...
Speaker 1 (28:19):
to distinguish what it's missing and what it's getting pretty well.
If you're using this as the tool to train yourself
and you don't have any background knowledge, that is the problem, right?
You need it to teach
you correctly. It's much more difficult... as someone who started
poker from zero as an adult, right, let me tell you, like,
(28:40):
one of the most important things I learned was that
it's much easier to teach someone from zero, because I
didn't have any bad habits.
Speaker 3 (28:47):
Right.
Speaker 1 (28:47):
If I had instead learned from ChatGPT, and those
were kind of the habits and the thought processes that
I acquired, and some of them were just wrong or
didn't teach me how to think correctly through things, I'd
be a really shitty poker player.
Speaker 2 (29:01):
Yeah, so let me give you some examples of how
I use ChatGPT.
Speaker 1 (29:05):
Right.
Speaker 2 (29:07):
You know, one is kind of as a research assistant,
but, like, once you already know something about a topic, right?
Like, I'm not getting a first brief, but I'm, like,
querying it, where I'm saying, okay, I talked to this person,
here's a description of how an AI thing works, or
a crypto thing works, or a concept in finance works, right?
Will you vet this for me? What critiques might you have? Right?
(29:31):
It's maybe not quite as good as talking to, like,
an expert, but I find, like, you often get a
lot of value from that. And again, it's not the
last step in the process.
Speaker 1 (29:39):
Right.
Speaker 2 (29:39):
You can also use it for creative inspiration: give me
ten potential headlines for this. You can use it to fill
in missing words, because it thinks in terms of a
big matrix, right? So, like, what's an analogy that I
can think of? What's this word or concept that I'm missing?
Invent the name for this thing or that thing. You
can use it to kind of squeeze quantitative data out
of qualitative information. Like, I asked it, for example...
(30:00):
and I'd have to identify the exact prompt again, right... but to, like,
vet my estimate of how liberal or conservative different eras
in American history were, on a negative ten to positive
ten scale. It can make ranking lists of different kinds.
I mean, there are just so many use
cases for it. And, like, people just want to use it to,
like, cheat on papers, or use it as a substitute
(30:23):
for, like, Wikipedia or something, which are not the best
use cases for it. And if you ask ChatGPT,
it will tell you that those are not the best
use cases for it.
Speaker 1 (30:30):
Right.
Speaker 2 (30:30):
The queryable nature of it, and the fact that it
reorganizes this information in a way that, for many purposes
but not all purposes, is much more approachable, accessible...
I don't know, I think it's a miraculous technology.
I mean, you know, if I
had fallen into a coma in twenty fifteen and
woken up, having missed the whole first Trump administration, I'm like, oh,
(30:55):
Trump's president? Oh, he was already president? Surprise. Then, like,
you would be fucking blown away by this shit, right,
and be like, oh my fucking god. Right? Just, like,
passing the Turing test. I mean, there are debates about
the definitions, whether it meets it, whether it's a good test, whether it
actually does. But, like, it basically is, like, human-esque
intelligence, in some ways inferior, in some cases superior, over
(31:18):
a large domain of fields. Just the way it can,
like, parse this very open-ended, fuzzy logic of text strings.
I mean, I just think it's kind of amazing.
It's amazingly robust in some ways, right? Sure,
you can misspell things and it gets... I mean, it's amazingly
robust in a way that, like, it's hard to think
of other technologies that compare to it exactly. And, you know,
(31:43):
it's all solved using very simple underlying math, right? I mean,
the code for DeepSeek is something like a few
hundred lines of code long. My fucking election model is
longer than that, right? It's, like, it's kind
of a miracle.
Speaker 1 (31:59):
Absolutely, absolutely, I agree with all of that. But
now, I think, to push it a little bit further: obviously,
this miracle, as we know, is coming at a big cost, right?
Environmental cost. There's also p(doom), right, we've talked about
that potential risk. Is the miracle of, you know, giving
(32:21):
you a good analogy worth it at this cost? That's...
I think those are kind of...
Speaker 2 (32:26):
Don't give me this environmental crap.
Speaker 1 (32:28):
I'm not talking... I'm talking about p(doom) and environmental.
It's not crap, Nate. We talked about this before: like,
a lot of energy.
Speaker 2 (32:35):
Okay, look at this. This is something that's supposed to scare me,
from Four impress dot org: "The energy consumption for training ChatGPT's
leading model is even more staggering, equated to that
of an American household for more than seven hundred years."
So basically, to train this leading model only took seven
hundred households' worth, like one subdivision of some fucking neighborhood
(32:56):
in Tulsa, right? It's not very much. Like, you
know, you're undermining the argument for p(doom), we all
die... I mean, you know, obviously, if
the models get hungrier and hungrier. But, by the way,
the DeepSeek thing should be good news for the environment.
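[Editor's note: the back-of-envelope arithmetic behind Nate's reframing: energy equal to "one household for seven hundred years" is the same total as "seven hundred households for one year." The per-household figure below is an assumption, roughly the average annual electricity use of a US household, not a number from the episode.]

```python
# "One household for 700 years" == "700 households for one year":
# household-years multiply out into a single energy total either way.
HOUSEHOLD_KWH_PER_YEAR = 10_500   # assumed average annual US household use
household_years = 700             # the figure quoted in the article

total_kwh = HOUSEHOLD_KWH_PER_YEAR * household_years
print(f"Training energy ~= {total_kwh / 1e6:.1f} GWh")   # about 7.3 GWh
print(f"Same as {household_years} households running for one year")
```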
Speaker 1 (33:11):
Well, that's why I said we don't know, because, on
the one hand, maybe, depending on what the actual resources are.
On the other hand, if it means that every single
person is now running these smaller things, or not every
single person, but if it makes it more likely that
more of these are run, what's the net impact, right? If
it's one tenth the energy per use but you actually have a hundred times
more people using it as a result, that's ten times the total, so it's
(33:33):
obviously a net negative impact instead of net positive. These are
all open questions, right? And, like I said, I am
not an AI skeptic. I think it's really cool. I
think there are lots of really interesting things here. I
just think that there are other... you know, you can't
also be, like, a rah-rah cheerleader, none of
this matters. Of course it does matter. I think all
of these things matter.
Speaker 2 (33:53):
I just think people are over-indexing to, like... I
don't know. I mean, have you taken Waymos, Maria?
And by the way, if you're not aware, this is
a self-driving car company which is available in San Francisco, Phoenix,
and maybe one or two other places.
Speaker 1 (34:08):
LA.
Speaker 2 (34:08):
I think. Have you taken a Waymo in any...
Speaker 3 (34:10):
Of those places, Maria, I have not name.
Speaker 2 (34:12):
It's fucking Blade Runner. I'm telling you, it's a very
good experience. It's a much smoother ride than, I'd say,
ninety-five percent of Ubers. They have, like, space-age music,
and you feel like you're in the fucking future. And
I would almost guarantee you that driverless cars are
going to be a very popular technology.
Speaker 3 (34:32):
I don't know. I don't know.
Speaker 1 (34:33):
I watched, I watched that episode of
Silicon Valley where he's in the driverless car and
ends up on a boat somewhere in the
middle of the ocean. So I'm a little... obviously,
TV show, comedy.
Speaker 3 (34:48):
But, but, you know, you never know how
the experience will end up.
Speaker 1 (34:53):
But I'm going to San Francisco next week, Nate, so,
you know, maybe I'll take my first Waymo.
Speaker 2 (34:57):
Take a Waymo. It's, it's like a ninety-fifth
percentile Uber driver.
Speaker 1 (35:03):
All right, Well, well, on that positive note, Shall we
talk a little bit more poker and switch to a
listener question? Okay, fine, we'll be back right after this.
(35:27):
All right, So we had a poker related listener question
that I think, Nate, you are probably more equipped to
answer in the sense that you play cash home games
and I don't. By the way, I'm really sorry if
you can hear some knock knock noises in the background.
(35:48):
Apparently the apartment above mine has just started construction. It
actually just happened as we started taping this podcast. This
is the first hammering I have heard. But of course
you get to experience it alongside me, because I love
our listeners and I want to share all of my
experiences with them. So Nate, here's the listener question. I
(36:09):
have a neighborhood poker night with my friends. Everyone plays
really loose and passive, lots of calling, not much raising.
How do I win against real amateurs like that? What
are the most common and easy to detect tells by
amateurs like this? So part of this I can answer too, right,
because this happens in tournaments as well. But let's start
with what you think. Since you play in home games,
(36:29):
you know this is something that you find fun and
not an experience that I often have.
Speaker 2 (36:37):
So, it depends. This comes from listener...
Speaker 1 (36:40):
Hugh.
Speaker 2 (36:40):
H-u-g-h. I guess that's the way you spell Hugh.
Speaker 3 (36:48):
Hugh does sound British.
Speaker 1 (36:49):
I'm sorry, it's just a name that I automatically associate
with being English.
Speaker 2 (36:54):
Okay. So what are the basics for a loose home game?
I mean, it depends on if you're talking about, like, really
bad players. He seems to be.
Speaker 3 (37:03):
He seems to be, so.
Speaker 2 (37:06):
You know, the basics are: you actually don't want to
play like everyone else. Meaning, you know, don't be so
loose and passive. Right, you want, particularly out of position,
hands that can make the nuts, right? So suited hands
particularly, you know, ace-x suited, Broadway suited hands, ten-nine
(37:26):
suited kind of and above, right? So, number one, hand
selection becomes more oriented toward strong hands that can
make big, you know, straights and flushes and better. That's
part one, right? Number two, you're gonna want to, like,
increase your bet sizing, maybe quite a bit. Theory says
(37:46):
in a cash game that you're supposed to raise to
maybe two and a half x the big blind; here
you can go four or five x. If there are already limpers,
you can raise even more than that, right? There are
some games where the standard open might be, like,
ten x or things like that. But, like, let me
maybe back up even a little further, right:
I actually have dealt poker games to total rank amateurs,
like literally five to ten people who have never
(38:07):
played poker before.
Speaker 1 (38:08):
Right.
Speaker 2 (38:11):
The two things that they most routinely get wrong are,
number one, they call too much, meaning they call and
play it like a slot machine instead of folding or
raising more. And number two, they don't understand bet sizing:
what is the size of your bet relative to the
size of the pot? Right? If you don't know anything
(38:31):
about poker, no, nothing at all, then just bet half
the size of the pot. Keep track of what's in
the pot and bet half that size. But in general,
don't do so much calling.
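[Editor's note: the sizing rules of thumb from Nate's answer, written out as a sketch. The 4-5x open and the half-pot default come from the discussion above; the "one extra big blind per limper" adjustment is a common live-game convention added here for illustration, not Nate's exact prescription.]

```python
# Rule-of-thumb bet sizing for a loose, passive home game.

def open_raise_size(big_blind: float, num_limpers: int = 0) -> float:
    """Theory says ~2.5x the big blind; in a loose game Nate suggests 4-5x.
    Adding one big blind per limper is an assumed convention, not from the show."""
    base = 4.5 * big_blind                  # midpoint of the 4-5x suggestion
    return base + num_limpers * big_blind

def default_bet_size(pot: float) -> float:
    """The beginner rule from above: when in doubt, bet half the pot."""
    return pot / 2

# Example: a $1/$2 home game with two limpers already in.
print(open_raise_size(big_blind=2, num_limpers=2))   # 13.0 -> open to ~$13
print(default_bet_size(pot=30))                      # 15.0 -> bet $15 into $30
```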
Speaker 1 (38:42):
Right.
Speaker 2 (38:42):
If you have a good hand or a good bluff,
or even if you just kind of think other people
are scared, then do some raising, if you think
they'll fold. Most amateur players are not going to
go nuts. I mean, it's a little complicated, because, like,
they might not understand hand strengths, so you might want
to weight absolute strength a little bit more. But, like,
don't be afraid to get the money in when
you have a good hand. And when other people are
(39:02):
representing a good hand, then it gets a
little bit complicated, but then, you know, you don't have
to do a whole excess amount of bluff catching. I mean,
those are the basics. And then I'd say, like, in general,
in these loose cash games, people are very sticky. Now
we're talking about a slightly higher caliber of players, people who
(39:22):
have played before, right? People are mostly very sticky
preflop and on the flop, and then they will
start to fold. Cash game players do like to fold
on turns and rivers sometimes, right? So that means that,
like, you know, that can affect your whole strategy for
the whole hand: you tend not to have
a lot of fold equity preflop and on flops, and
then it requires multiple barrels sometimes.
Speaker 3 (39:44):
Yeah.
Speaker 1 (39:45):
I think that, as someone who is a tournament player,
there is some advice that I think applies all around,
which basically goes hand in hand with what you said, Nate.
Number one, you don't want to follow the tendencies of
the people who are making mistakes, right? So if people
(40:05):
are too loose, you actually want to tighten up. If
people are passive, you want to become more aggressive. I
think that's important. But you should also realize, if they're
going to be sticky, then you should just bet huge, right?
Like, if you'd normally bet
half pot, just bet pot; they're going to call anyway, right?
Build massive pots with your good hands. This will also
enable you to bluff, right, because they will eventually fold. Now,
(40:28):
something you said... I think this is actually true, not
just of cash games, but in general, in tournaments as well:
people do tend to overcall the flop and overfold the turn. So,
you know, I think building an overbetting
strategy into your game, in a game like that,
is really good, right. And sometimes they'll get very sticky,
(40:51):
like I've had... Now, the second part of this question
was tells, which I think is just bad. Do not
use tells, even though in games like that people probably
do have tells, especially if you're going to play with
them over and over. It might be a little different,
but I just don't think that's great to rely on.
But when I've relied on tells, I've actually made really
big mistakes, because I've had situations where
(41:11):
I'm like, oh, this person really likes their hand, they
must be really strong, and I end up folding. And
they had, like, ace-deuce offsuit, but there was
an ace on the board, and they thought that it
was just, like, the nuts, right, because they completely overvalued
the fact that they had an ace. And so they
were playing it like they had the nuts, and they
thought they had the nuts, but they really didn't. So
if people are bad, don't use tells, because
(41:34):
their perceived strength of their hand may not actually be
the actual strength of their hand.
Speaker 2 (41:39):
I'm more tells-based. It's funny, because we have, like, opposite personalities.
You're, like, way more sound theoretically, and I'm kind of,
like, psychoanalyzing people a little bit more. And, no, look,
I think there are, like, two categories of tells
from very inexperienced players that are not always
(42:00):
easy to distinguish, and require additional context. Right, there is
the really bad actor tell, right, where they just...
like, they watched, like, poker movies, where you're supposed to act
strong when you're weak, and they just
really overdo it in, like, a comical way, right? Yeah.
And I have seen that. But sometimes people
are, like, extremely... There's also, there's a...
Speaker 1 (42:22):
Lot of Hollywooding, actually, and I've seen this, uh,
from amateurs, where, like, if you have the nuts, right,
like, say you flopped quads or something like that,
and then they'll just be like [sigh], and then,
like, "I guess I call," right? If someone's
doing that, like, holy shit, you're beat.
Speaker 3 (42:44):
Like, just... there are certain situations like...
Speaker 1 (42:47):
that. But I think, in general, it's better to...
we haven't played...
Speaker 3 (42:52):
in Hugh's game.
Speaker 1 (42:55):
So I think just sticking to the advice that
we've given, which is, you know, don't be loose passive. Basically,
you have to tighten up your ranges. You have to
figure out what those ranges are. And, you know,
the other thing is, you know, your bet sizing is
going to change, because if people are going to be
(43:16):
calling stations, great, exploit it. If people are going to
call preflop anyway, great, make your sizes bigger. Just
build pots when you have very strong hands.
Speaker 2 (43:25):
I mean, the other thing, you know, to close this
discussion on tells: people can also be very honest, right?
Like, you know, in games where the stakes
are low relative to people's, like, net worth, which depends
on people's net worth, right, they just don't necessarily
take a lot of action to, like, conceal disappointment with
(43:48):
a bad flop or things like that. You know, a
lot of times, "Ah, I gotta catch my card"... like,
that actually, more often than not, is honest, more in
cash games than in tournaments. I don't know why, right?
I think in tournaments, people are just, like, playing their
A game a bit more. Maybe that's the secret of cash games, right? People are
(44:09):
playing their A game more often in tournaments than cash games.
Speaker 1 (44:12):
Yeah, no, I actually think there's something
to that, that people do tend to be more honest in
cash games. I had a hilarious situation at a higher-stakes
cash game where I had raised, and, I don't
remember, the small or big blind had defended,
and anyway, it went, you know, check, bet on the flop,
(44:33):
then check, check on the turn, and then... no, no,
not check. And I was, like, looking to see...
I had nothing... what I wanted to bet on the
river, and he just folded. He's like, "You definitely have
me beat, because, like, I've got nothing." And I had nothing, right? Like,
I don't think I definitely had him beat, and he
just folded to me, right? I didn't even have to
think about the sizing or whether I was going to
(44:55):
bet or any of it. That would never happen in
a tournament, but it happens in cash games all the time.
Speaker 3 (45:00):
And I still remember this hand.
Speaker 1 (45:01):
I don't play cash very often, so things like that
stand out. But I've seen people do that, and then
they try to do it in tournaments. Actually, you can
often spot a cash player in a tournament, because they will
sometimes fold out of turn. They'll do things that
just, like, very honestly communicate that they have no more
interest in this hand.
Speaker 2 (45:17):
And don't be a super nit, right? Like,
people overvalue, especially in cash games, the
last thing they saw, right? So, like, if you have,
like, an occasional hand where you get a little out
of line, right... and again, I think, you know, it's usually
worth picking your spots carefully, and there's some psychology to that.
(45:38):
And then, like, if you show, oh, I three-bet
five-four suited from the button, which might actually be
a perfectly fine, near-GTO three-bet occasionally, right... if
you turn a straight with that and the other guy folds,
you definitely want to show that hand, right? You want
to, like, maintain your reputation, because, like, a seat
in a good cash game is a valuable thing, and
(45:59):
people absolutely will notice if you're being a nit. Don't
be a nit.
Speaker 3 (46:03):
Yep.
Speaker 2 (46:03):
Have fun. Play toward the looser end of your GTO range,
although your GTO range may actually be pretty tight against
fish who never fold. Yep. Good luck, Hugh. Good luck, Hugh.
Let us know what you think of the show. Reach
out to us at Risky Business at Pushkin dot FM.
(46:26):
Risky Business is hosted by me, Maria Konnikova, and by
Nate Silver.
Speaker 1 (46:31):
The show is a co production of Pushkin Industries and iHeartMedia.
This episode was produced by Isabel Carter. Our associate producer
is Gabriel Hunter Chang. Our executive producer is Jacob Goldstein.
Speaker 2 (46:43):
If you like the show, please rate and review us
so other people can find us too. And if you
want to listen to an ad-free version, sign up
for Pushkin Plus. For six ninety-nine a
month, you get access to ad-free listening. Thanks for
tuning in.