Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. Welcome back to Risky Business, a show about making
better decisions. I'm Maria Konnikova, and.
Speaker 2 (00:31):
I'm Nate Silver. Today on the show. Look, it looks like I'm taping this in candlelight because all the power is out in New York City and around the world because of the government shutdown. No, there has been a government shutdown for more than a week now as you're hearing this, a bit less than that because we're taping this. We're going to talk about how it's evolved,
what the polling says so far, and you know, do
(00:53):
a check-in on which side, if either, is applying
the right strategy.
Speaker 1 (00:58):
And after that, we are going to talk about one
of our pet favorite subjects here at Risky Business, which
is AI and the release of a new AI video
generation app from OpenAI called.
Speaker 2 (01:13):
Sora. Sora 2. Sora 2.
Speaker 1 (01:15):
Sorry, I can never get it, man, the names of OpenAI products. But the segment will be about broader societal concerns that go beyond whether it's called Sora or Sora 2.
Speaker 2 (01:30):
How dystopian are we? We're going to answer that question for you. On a scale of zero to ten, how far down the line are we to a dystopia?
Speaker 1 (01:42):
But before we dive into AI, let's talk a little
bit about the government shutdown. So we are recording this
on Monday, October sixth; you'll be hearing it on Wednesday the eighth. On the sixth, we're still in a government shutdown.
And Nate, we know you're an advisor to Polymarket. It
seems to me that if you look at the odds,
(02:03):
we will probably still be in a shutdown by the
time listeners hear this episode on Wednesday.
Speaker 2 (02:09):
Polymarket, the prediction... sorry, Polymarket, the prediction market for everyone. It's poly October, anyway. Yeah, according to Polymarket, there's only about a three percent chance that we'll look foolish by having released this episode when the shutdown's already resolved. And about a seventy five percent chance that it will be resolved within a week and a half or later.
(02:31):
Twenty five percent chance for the longest shutdown in American history.
Speaker 1 (02:36):
So, Nate, I am curious, what is the longest the
government has ever shut down for?
Speaker 2 (02:41):
It was thirty five days, in twenty eighteen to twenty nineteen.
Speaker 1 (02:45):
And how did that play out? Was the country experiencing any really negative consequences? Could we feel the shutdown as a nation? Because I don't actually remember what it felt like, to be perfectly honest.
Speaker 2 (03:00):
Have you noticed any effects of the shutdown so far?
Speaker 1 (03:03):
No, I have not. But if I were a federal employee, I might, right, because we know that if.
Speaker 2 (03:09):
you're a federal employee, then you may get furloughed, you may stop getting a paycheck, and if you're not furloughed, Social Security benefits may be delayed, right. Staffing at some agencies like the National Park Service may be limited, mildly to severely. You know, you have financial sector things about how we're making our Treasury payments and
(03:29):
things like that. Anyway, there's a lot of stuff. It's kind of a grab bag. The president has both de jure and de facto power to play around with things a little bit, right. I mean, there's some question about
whether Trump wants to make Democrats feel the pain or
if rather that would turn around on him. But yeah,
(03:50):
I mean, let's talk about the reasons why nobody is expecting a quick resolution.
Speaker 1 (03:54):
Right, yeah. So, I mean, there are two main things, the way that I understand it. Thing number one, healthcare spending, right, and/or cuts to healthcare, and the Democrats are taking a stand on that. This is
a place where we might get some common ground with Republicans,
because obviously, if your healthcare premiums are about to get
(04:14):
twice as expensive, if you're going to lose healthcare coverage,
et cetera, et cetera, et cetera, you care whether you're
a Republican or a Democrat. So common ground there potentially.
And number two, one of the things that the Democrats
seem to be insisting on is some sort of a
curb on Donald Trump's ability to limit appropriations, stop
(04:38):
appropriations that Congress has already appropriated. Sorry, I'm saying appropriations way too often, but I don't know what else to say. My vocabulary in this particular case is limited. So they're trying
to curb executive power somewhat so that they know that, Okay,
if we allocate these funds, these funds are actually going
to be used. We want you know that to be
(05:00):
the case. We want to reclaim congressional power of the purse.
Did I kind of summarize it the way that you would have, Nate, or am I getting it a little
bit wrong?
Speaker 2 (05:08):
I think you summarized Chuck Schumer's spiel there. Yeah, look, what Democrats' heart is really into is taking a stand against the encroachment on the authority of Congress, really against authoritarianism. Put that in quotes if you want, right.
But like, if you look at kind of what Democrats
(05:30):
that are arguing for the shutdown the most really care about, right,
They're like, yeah, healthcare would be nice, right,
And they are on the right side of public opinion
on that issue. I mean, these cuts are unpopular, or failing to extend the subsidies, I should say, right, to the extent that, you know, you're almost doing Republicans a favor if they agree to amend this. Potentially, Chuck Schumer
(05:53):
will say, Republicans are refusing to take our input on
a budget and have this negotiation over healthcare that everyone
wants to have, right, and therefore the shutdown is their fault.
Speaker 1 (06:07):
Right.
Speaker 2 (06:08):
Whereas, like, for the past month, everyone's been arguing in the New York Times, you know, about whether Democrats should shut down the government to take a stand against Trump, and we're not sure what the stand should be about. And Chuck Schumer settled on healthcare eventually, right, but for all types of reasons. I suggested it should be about tariffs; I guess no one else liked that idea, right. Some people
(06:30):
just think that, hey, we cannot in good conscience vote to fund this government, right. And you've seen some, like, seepage toward that message, like Senator Chris Murphy, the senator from Connecticut who loves to be on MSNBC, right. But he's saying, yeah, it's not just about healthcare, let's be honest, right. A lot of the Democratic influencers, what do you call them, the media personalities, are kind of saying the same thing. And so, like, it's a hard tightrope, yeah, to walk, right,
(06:55):
it's kind of.
Speaker 1 (06:55):
A damned if you do, damned if you don't situation
if you think about it. You know, since you referenced
Chuck Schumer, because you remember, last time we talked about
the shutdown, it was averted because Schumer decided that it
was time to, you know, compromise, avert the shutdown, and he got pilloried for it. His approval ratings were horrific.
People are like, oh, we can't believe you caved. Now,
(07:15):
you know, he's like, okay, fine, we're going to stand strong,
we'll shut down the government, and his approval ratings still suck, right.
Now he's getting blamed for the fact that the government shut down. So it is, you know, as you say, a hard tightrope to walk, and there seems to be little way to walk it short
(07:38):
of both sides being willing to compromise, and that doesn't seem likely in the immediate term.
Speaker 2 (07:45):
No, I mean, first of all, there's some degree of, like, posturing here. First, I think it was very predictable that you'd have at least this amount of shutdown, right. You know, polls show that more of the public blames Republicans than Democrats right now, right, but I'm not sure that really tells us who's winning the shutdown. Sure, I mean,
we'll just kind of assume as a default that Republicans
(08:06):
are more obstructionist, I think, right. And so
Democrats are kind of coy about it, and they say, oh, you know, our hand
is forced right where it's like, you know, people can
read the New York Times where you argue about this
and, like, your hand was not forced. Now what you
could say is like, look, you know what, you got
your majorities in Congress. The majority can change the rules
(08:28):
that it needs to, right. We have so many problems with, A, the budget and, B, everything that Donald Trump is doing, that we're not going to fucking support anything that Donald Trump does that requires affirmative Democratic votes, right. So go ahead and break the filibuster. You're insincere about it anyway, right? They already keep paring away at it, as Democrats have too in some cases, right. So you
have the votes, pass your budget, and we'll see you
(08:50):
at the midterms. But instead there's this, like, spin that I think is designed to win this poll question, like the first Washington Post poll that comes out that says, okay, who is more to blame for the shutdown? And just on that one question, because the other questions aren't necessarily so good for Democrats or Republicans, then maybe you win that. But, like, you know, when you're
in a repeated game to ground this in the show
(09:13):
a little bit, then credibility matters. I don't mean so
much with Republicans, right, there's not very much trust there anyway.
But I mean with voters, yeah, a
little bit, right, Whereas if you had owned the shutdown,
it might have been more unpopular at first. Now look,
part of it is that Schumer's job is probably at a fair bit of risk. In fact, a plurality of Democrats
(09:36):
in a new poll disapprove of Chuck Schumer, have an unfavorable view of him. Not a majority, but a plurality. You know,
he is also very much threatened by AOC or other
primary candidates in twenty twenty eight when he runs in
I guess three years from now. He's also old as fuck.
I mean, he's not old as fuck, he's just old, not as fuck, just old.
Speaker 1 (09:57):
Nate, we have other people for whom we should reserve the "as."
Speaker 2 (10:00):
Fuck. Yeah, he's just old, not old AF. But, like, so, he probably wants to, like, give the annoying, did I say annoying? He probably wants to give, like, the partisan Democrats a win, right? That we have
flexed our power. This keeps the base happy. We've gotten
(10:20):
some token, or maybe not so token, concessions on healthcare and things like that, right. This shows that when we stand and fight, we win, right? Yeah, big
morale boost. Does it provide much more than morale? I
don't know. Does it help you win the midterms? I
don't know, right. The people that care about this are people who are gonna vote anyway for the most part. But, like,
(10:43):
but if Schumer blows it, right, if he caves too soon,
then he's gonna look like he's weak, and then he
might be out of power. Right. If it goes on
too long, then, A, everybody kind of gets more egg on their faces, and, B, he loses control of the message, which he doesn't really have to begin with. And so, like, he would love to have that, you know, early, that
(11:05):
October tenth to fourteenth window that Polymarket says is a one in four chance. He would love for that window to happen. Right, the more it goes on,
and the more you already have influential Democrats saying, you know,
it shouldn't really be about healthcare, come on, right, then
it becomes this big uncertain morass. And look, you know
(11:26):
you already have some swing state Democrats who are worried
about this, right. John Fetterman is not on board with a shutdown. The senator from Wyoming, she's not on board with it. Angus King from Maine. Jared Golden is a representative from Maine, very in the Trumpy district, independent minded, right. So, like,
I don't think Democrats feel like this is gonna be
(11:50):
a big win for them, right. But, you know, whatever, have fun. It's kind of actually not that high stakes compared to a lot of things that are going on, which is kind of, to me, why it feels a little bit discordant. But they have to make a move, right, and they couldn't do nothing, or Schumer might lose his job
(12:11):
if he did nothing right, and they don't really have
a great hand to play. But Republicans aren't popular either,
and they're insincere in their own ways, believe me. And
what Democrats are asking for is fundamentally pretty reasonable, I think. Uh, yeah, so there we are.
Speaker 1 (12:27):
Yeah, no, I think that was a good summary. And one other thing that I will
say is, you know, note the language that you've been using,
which is also the language that they're using, which is, oh,
we need to win. You know, we're fighting a war,
et cetera. So if we couch this back in game
theoretic terms. You know, they are making this into more,
you know, of a zero sum contest than it has
(12:49):
to be, right. It's like an us versus them, like, we need to win, as opposed to, okay, let's figure this out, it's positive sum, like, let's compromise, let's do what's good for the country. That language
what we will need is some sort of a compromise.
And as you point out, Nate, some of the things
(13:10):
like healthcare, those are going to help Republicans as well
in the midterm elections. If Democrats actually can get that changed,
it'll help the general public, but it will also help
Republicans because a lot of people who will otherwise be
incredibly pissed off will no longer be pissed off. And probably,
as you say, you know, there are the people who already
pay attention to this, and then those who don't. I'm
(13:32):
guessing that a lot of those people who would have
lost health care coverage if nothing was done, right, won't even realize that this happened, right. They won't even realize that there was a change, that there was a compromise, that Democrats had anything to do with it. They'll
be like, yeah, see, we knew that Trump would never
take away our health care because if you're not paying
attention to those kinds of behind the scenes machinations, then
(13:57):
you might not even realize that's
the case. So that'll be a win for Republicans as well,
as you said. And so I think that it's just
very interesting to see how, you know, politics has always
been a fight to some extent, but the extent to
which the language has become kind of zero sum, radicalized,
as opposed to a more you know, we all serve
(14:18):
the people type of rhetoric, which would be I think
a little bit more appropriate here, but clearly not the
stage at which we find ourselves.
Speaker 2 (14:27):
Yeah, so it's not zero sum if you're an incumbent, right, because one thing you can be sure of is that people get really mad at incumbents in general. And so, you know, unfortunately, given the type of incumbents we have, you know, we're represented in Congress by this current generation of representatives, and if there's a pox on both houses, then they might lose, wherever the split between the parties goes, right.
(14:49):
But yeah, you know, polling shows pretty clearly that Democrats no longer want compromise with Republicans, right. They want to fight. You know, I, as always, am contrarian
because I thought that the spring was a better time
to shut down than now. You had Elon Musk as
a good scapegoat, I think a more salient scapegoat
(15:10):
slash you know, relevant excuse because he was fucking with
the budget, right, and I think they were a little
slow to move then and now it's like, I mean,
the fact is that when you're shut out of Congress, shut out of the Supreme Court, basically, right, you're shut out of the presidency. You're even losing control
(15:33):
over blue academic media institutions that you once controlled. I mean,
you know, Democrats are, number one, very angry, right,
and number two feel like they need something, anything that
they can call a win. Yep.
Speaker 1 (15:48):
Yeah, I think that the optics are incredibly important. So yeah,
let's see what happens. Let's see if Chuck Schumer can
hit his magic window or not. Polymarket odds are against it,
but we shall see. On that note, let's take a
quick break and when we come back, we're going to
talk about the new OpenAI release, Sora, or sorry, Nate,
(16:10):
Sora 2. So last week, last Friday, October third, OpenAI released a new app. And even if you're
you know, not terminally online, unless you're living under a
(16:30):
rock and you have some form of social media, you
were probably aware that Sora was released, because I don't
know about your news feed, Nate, but my newsfeed briefly
became just like completely Sora with the announcement video, all
these other videos, people were just going crazy. So
for a good twenty four hours, Sora was my newsfeed.
(16:50):
And this is kind of the next gen video generation
app that they've been working on. And yeah, it's as
of now invite only, but man, it's as of now
still free, even though there are, you know, already murmurs that it's not going to be free for much longer, because Sam Altman said that he wasn't prepared
(17:12):
for how many people would be making those videos, which seemed very strange to me. Like, of course you should be prepared for the volume that it's going to generate at this stage, right? You seem to be unprepared every single time you release something new.
Speaker 2 (17:27):
Well, that can be a classic tactic in politics too. Absolutely. Oh my god, we have overflow capacity here at this tiny venue; well, you should have picked a bigger venue, right? Yeah, it looks good on TV. Yeah, it generates buzz, right? Yep.
Speaker 1 (17:39):
So yeah, let's talk about Sora. Let's talk about, you know, the good, the bad, the ugly. But first,
just a brief explanation. So, Nate, have you used Sora
yourself yet?
Speaker 2 (17:52):
I have not. I don't know why; I was going to use it in preparation for this program. I don't have any... What would I do with it? Yeah?
Speaker 1 (18:00):
I feel the same, exactly.
Speaker 2 (18:02):
I mean, it seems, it's like, oh, you can generate an image, and it's like, okay, I have a good imagination, you know what I mean? Yeah. Like, what would I do with it? Do I feel like an old person?
Speaker 1 (18:10):
I mean, no, I actually feel the same way, which is that it's using a whole lot of capacity for something that, I don't know how much of this we need. But the idea is that you give it a text prompt and then it creates a video from it. And the video, when I said next gen AI, it really is; it does look a lot better than AI generated videos did before. You
(18:32):
can even upload your own photograph. I would actually urge
you not to do that because this is more training
material for the apps and you kind of lose a
lot of control over it. Anyway, we can talk about that. So you can upload your own image,
you can use historical figures. People have been making videos
(18:52):
of MLK. The most famous one that has been circulated
is Sam Altman shoplifting from a Target. So yeah, you can.
You can give it lots of prompts and it spits
out the output, and as of now it is watermarked, but the watermarks are already being removed. They're
(19:13):
not very sticky, it turns out. But yeah, so it's
this new frontier of video generation. And even though it
says that they have guardrails about usage, you know, consent,
celebrity usage, political violence, people have already demonstrated that Sora
is capable of creating incredibly realistic images of things like protests,
(19:40):
violence in the streets, you know, things that border closely
on political violence, which it says it doesn't allow, and
some experts in video AI generation and detection. So there
are people who have companies that actually are supposed to,
(20:00):
you know, be very, very accurate at figuring out whether something is AI generated or not, are saying, hey, this is really tough, right. Like, some of these are actually
quite good, and it's becoming very difficult to figure out
whether this is real or not, which is
scary because once we stop having that capacity, what happens
(20:22):
with video evidence, right, what happens with deep fakes, what
happens with kind of the ability to tell what is
real from what isn't real. We're at the doorstep. We're almost there, and I think Sora will probably take
us over the doorstep.
Speaker 2 (20:38):
Yeah. Look, you can have a watermark, or you can
have some type of digital signature embedded, right, or you
can just say, okay, there are artifacts that suggest that
this is artificial, right. You know, I suspect we're probably a few years away from that, because there's more going on
with vision than text, you know what I mean? You know,
these AI detectors for like college essays I think are
(21:00):
not very reliable, but visual artifacts are, I believe, harder to entirely eradicate, potentially, right. You know, we've been able to manipulate photos for a long time. I'm not sure it's changed things all that much.
I don't know. I mean, people are always, like, misinformation this, misinformation that, right. Like, I think it's important,
(21:24):
but I think that's something people will adapt to. And
I don't know. I mean, I'm more interested in, like
what this tells us about, like what open AI wants
to be, right, and how will AI become a part
of our life. I suppose you know, this is not
what you would call a particularly high brow product, you
(21:44):
know what I mean?
Speaker 1 (21:45):
No, it is not.
Speaker 2 (21:48):
It's refined and interesting, and I'm sure you'll see interesting art produced with it, and, you know, I say that unironically, and things like that. But, like, this is
kind of a facebooky product, not to demean our friends at Meta, right. But, like, this is consumer focused,
(22:09):
middle brow, right. They seem not to be terribly concerned
about like you know, the use of copyrighted characters and
whether that falls under fair use is a legal debate, right,
But like you know, I think if they were trying
to go for a more intellectual crowd, they might have
drawn more guardrails around that. But, like, to me, it
(22:31):
seems like OpenAI is turning into a more
legibly normal consumer product technology company, you know what I mean.
Now you can say they're doing that as an interim phase, right,
in order to then raise enough money to take over
the world with AI, right, or build AGI or a
(22:55):
SI, artificial superintelligence. But, like, this is not quite the AI timeline that maybe the p-doomers feared exactly, right.
You know, I compare this with the somewhat underwhelming release
of GPT five, right, And we did an episode on
(23:16):
GPT-5, what was it, like a month or so ago, right? Like, I've remained a little bit underwhelmed by it, I would say, right. You know, look, I pay two hundred dollars a month for the pro mode, use the pro mode,
and it's better, right, but like you know, it still
feels very limited in capabilities, and it still feels like
(23:38):
I have to tell you this, did I say this on the last show? So my friend has a kid, I think she's like eight years old or in second grade, right. And you know, I had ChatGPT, this, you know, machine that can, like, solve Math Olympiad problems, right. So
she had a Snoopy coloring book right where it's like
you have to like fill in you know, little word
(23:59):
puzzles that are very simple, unscramble words, and, like, you know, Snoopy and the Peanuts people, Charlie Brown, Lucy, whatever, Linus, they're in a spaceship. You have to, like, connect the cord with the person in the spaceship. It's designed for eight year olds. Every single problem, ChatGPT totally fucked up. It, like, cheated, right.
(24:20):
That's amazing. Yeah, yeah. When you had Snoopy, you know, you're looking at him straight on, and the left half is filled out with an image of Snoopy facing the child who's filling out this cartoon, right, and the right half is blank. So it's just teaching you to mirror Snoopy on both sides, right. So I'm like, oh, I'm not going
(24:41):
to even give this to ChatGPT, it's too easy, right? So I photograph Snoopy and give it the puzzle, and
instead it turns Snoopy around ninety degrees. So Snoopy is
now, not, excuse me, he was facing you. Now he's in profile, right, you see his snout is facing right. So, like, it totally cheated, and, like, it
(25:02):
gaslights you, right. It's like, you know those puzzles where you have to find the things that are different in the image, right? It will, like, make up things that are different in its
version of the image, but weren't different in the original image.
It just gaslights you. You know. By the way, I
don't think GPT-5 is, like, a total miss or anything, right. We're getting toward the end of the year.
(25:25):
Some people I think predicted bigger breakthroughs this year in
terms of agentic capabilities, for example, what they'd be able
to do in terms of you know, computer usage and
stuff like that. And, like, you know, as far as, like, core AI goes, it's been getting better, it's been improving, right.
This is the year where it's felt like we're not
on an exponential curve, right. It's felt more linear. Now,
(25:48):
that could be some type of punctuated equilibrium where there needs to be some new technology, you know. I think people mostly feel that, like, maybe merely adding more compute will produce diminishing returns at some point; it might
need some new techniques. Right. And by the way, if
you were an investor in AI, you know,
they're a private company for now, right. But if I were an
(26:09):
investor in OpenAI, I'd be fine with this. I mean,
all the big you know, Google, Facebook, et cetera. Companies
are very profitable and have very, you know, rich valuations, right.
But yeah, it's becoming a little, uh, what's the word? Slop is the word that some people use
for this type of image generation. Oh?
Speaker 1 (26:30):
Absolutely, I think slop is the right word. And I
think that I would actually go a step further and
say that as opposed to a step towards kind of
AGI or kind of bigger advancements, you know, P doom
et cetera. I phrased that poorly. It makes it seem
like P doom would be an advancement. P doom is bad, right,
(26:50):
that's why it's called P doom. But the advancements that would potentially lead us to a P doom scenario, these seem to be basically the only uses I can see of this, with some limited things. I don't want to straw man and say, like, there's nothing good that comes of it, but I think that a lot of the kind of
(27:12):
downstream effects of this are actually negative, right.
They're not doing anything to be a net positive value
to society. Instead, it's creating kind of slop. It is,
you know, taking up your time with activities that might deaden some of your brain cells and would probably be better spent doing something else. And it can
(27:33):
create really bad downstream effects if we look at the
types of videos that it is capable of generating. You
know what happens if someone, as they get more realistic, thinks, oh,
there really is violence here, There's really an uprising here, right,
there was really a bomb here. The Democrats did this,
you know, the Republicans did that. I think this can
(27:54):
go on both sides of the party line. So let's
not say one party or the other. What happens, Nate, if someone can get your face and makes it seem like Nate Silver did something very bad, or Maria Konnikova did something very bad or said something? You could probably implicate people in crimes they didn't commit.
(28:14):
And if you and I had exonerating evidence and we're like, hey,
we were taping the Risky Business podcast at that exact
time look at the video of it. If you can't
distinguish that right from the video that they generated, or
it can be incredibly difficult, time consuming to distinguish it.
It's not as easy as doing it immediately. That could
still upend our lives and kind of lead us into
(28:37):
a very uncomfortable spiral where even if at the end
of the day we are vindicated, it will still have cost, you know, a very long time, financial resources, emotional cost, all of these things. And those seem to be kind of the main uses of video generation of this sort. Or, you know, disrupting Hollywood and saying, okay, fine, we'll,
(29:00):
you know, create entire AI movies, which I don't think
you can do at this point with Sora in any
believable way. But that might be kind of one of the other downstream effects. What worries me more, especially as someone, you know, who studies aberrant behavior, con artists, et cetera, and
(29:20):
who's working on a book about cheating. What worries me
are kind of all of the nefarious uses of this
for scamming, for conning, for cheating. And you do have this thing called the liar's dividend, which I think is very prominent in something like Sora, where, I mean, Trump even said it himself, that basically, you know, if
(29:43):
you don't trust this evidence anymore, then you can just blame AI for everything. And Trump said, and this isn't a paraphrase, this is a quote: "If something happens that's really bad, maybe I'll have to just blame AI," right. And that's the liar's dividend, that, like, now liars are actually the ones who are benefiting. Cheaters are the ones that are benefiting, con artists, scammers, they're the ones that are
(30:05):
deriving the most benefit from this. So if your stated
goal as a company is to make the world a better place, and all you're doing is, on a good day, just generating slop, and on a bad day, creating content that might actually deleteriously affect people's lives on an individual level, on a broader level, depending on what we're talking about, then I start to question,
(30:27):
you know, what are we even doing, right? Like, what's the point of this? Because ChatGPT, you know, I'm not a fan of GPT-5, we've talked about this, there are still, you know, things that we don't like, but there are good uses of it, right. And large
language models, I can see, like, there are really great ways that this can, you know, speed scientific discoveries,
(30:47):
make work easier. There are lots of really great things.
And with Sora, I'm like, well, you know, at this point, give me the strongman case for what the amazing uses of this are, as opposed to just the meh, or the dead weight on society, or the actually actively bad societal effects. And Nate,
(31:11):
I can already see by your face that, you know, you'll rise to the challenge, even though you started off by being negative on Sora and saying that you don't even see why you would want to use it. But I'd
actually really like to try to figure out, Okay, what
are some really great, like, what is the societal good
from this? What is the thing from this that might
(31:33):
actually be, you know, medical breakthroughs, discoveries? I don't know. Like, let's try to figure out, you know, is there a way to spin this in a more positive light than the one that I have just recounted. Which is, yeah, well, exactly. Yeah, no, I mean, I haven't even mentioned all
(31:53):
the negative stuff. Yeah, please. I mean, that's a huge thing: porn, revenge porn, just all sorts of things that you can do with this technology, and with the access to photographs
and the frankly not great protections on image likeness, and
(32:14):
the ability to kind of use things that are copyright protected.
Speaker 2 (32:19):
And fanfic-y stuff, which may also have some commercialization. You know, you're on some weird wormhole on the internet, right, and you have something involving these characters that's not canon, and now you can
create your own movie universe, cinematic universe, right, especially for
(32:41):
fantasy type stuff where maybe the realism is not super
super important, and some of them will be popular, some
of them will be good, even, right. There'll be interesting art produced with this tool. But yeah, I think
large language models are fascinating. It's partly because I'm like
kind of like a language guy.
Speaker 1 (32:59):
I mean, you know, I totally agree.
Speaker 2 (33:02):
But they're also this kind of remarkable miracle technology, you know. I guess, look, in some sense, there is still some, like, shock and awe with these video models, right. You know, given current developments in AI, I'm not surprised, right, but I would have been surprised five or ten years ago, I think.
Speaker 1 (33:20):
Right.
Speaker 2 (33:22):
Yeah, look, and, you know, by the way, as you get more, like, customized versions of whatever fanfic you're into, that makes society more atomized also, right; we have, like, fewer and fewer focal points. You know, you'll probably get some really cool video games out of it, right. If you can custom generate unique animations, then, you know, video games will be cool. It's hard to find the
(33:43):
legitimate, quote unquote, positive use cases for it, right. Whereas image generation, you can see plenty of positives, right, and certainly language generation, right. I guess advertising, advertising, okay, that's fine, right, video games, right. But these are things that are kind of neutral and benign-ish, and then things that are bad, right. And there's not a lot that's good. Yeah, I don't think.
Speaker 1 (34:07):
No, I don't think so either. Did you see, by the way, Nate, just to show, like, how important video gaming data is to OpenAI? Last year, they offered five hundred million dollars for a company called Medal, which lets gamers basically upload videos of them playing these video games,
(34:29):
and the reason that they were willing to offer so
much is because, you know, you might be able to then train its models on those videos. There was a story on a tech site that I read a lot, The Information, about this, if you're interested in more detail, but I thought that this was a really interesting window
(34:52):
into what OpenAI might be thinking in terms of the future of its video apps. So yeah, it's funny you say video games will get better, but they will use video games themselves to be able to improve. It's kind of, basically, the way that
(35:12):
large language models work, right, where you need that feedback. You need the good stuff to be able to then
generate things on your own, and so you need access
to that data and to that raw data.
Speaker 2 (35:26):
And we'll be right back after this break. Google, I think, is the best comp for OpenAI, right, probably better than Facebook, but that's the second most obvious, right? Yep. They both do a lot of different shit, right. Like,
(35:47):
Google would not even claim that, like, everything they do is search or one degree removed from search, right. They have lots of smart people, they have lots of capital, they buy lots of stuff. Maybe they think it will improve search originally, and then once you're doing something else, right, they have, you know, electric vehicles and things like that. And, like, I kind of predict, like,
(36:09):
this is what we're seeing. Like, I don't know. I mean, I think Sam Altman would love to have artificial general intelligence, AGI, you know, the implications for society good or bad, right. But, like, this does not feel to me like a company that has a lot of confidence about its ability to, like, develop AGI on a one to three year timeline.
They might have something they call AGI, but like, if
(36:30):
you were on the brink of an intelligence explosion, as
both the doomers and the optimists sometimes call it, right,
you, I think, would not put out this video slop product, right. Like,
there are ways where they could have rolled it out
with a little bit more pretense, even though it
(36:52):
probably would be ninety eight percent the same, what you can do with it, right. But it could have had a little bit more pretense about, like, here are the precautions we're taking about copyright and pornography, and, you know, we hired whatever artists, we did a collab with these artists, right, to show you different ways in which it can be used for true creativity, right,
(37:13):
and here's a way it can be used for medical diagnosis. And, like, the marketing was kind of, you know, maybe they didn't do their marketing, but they're usually a company pretty savvy about its marketing, right. Yeah, everything Sam Altman says is savvy. But, you know, maybe I take that back; all the different versions of names they have for their products can be a little bit confusing. But, like, yeah, this to me says that, like, you know, this company
(37:37):
is Google, Facebook, Microsoft.
Speaker 1 (37:41):
Yes, they know that they're creating something that will potentially cause a lot of negative effects, and they don't
particularly care right now because that is something that, as
you say, in the short term, like, they don't have AGI, and this is something that can get people's eyeballs
for a limited amount of time, which means money, et cetera,
(38:04):
et cetera. So it seems that, you know, we've come
a long way from Sam Altman's initial goals for the
company to now, like, okay, let's just figure out a
way that we can increase immediate revenue right now. Consequences
kind of be damned. And, sure, like, once we get sued, and if something really bad happens,
(38:25):
we might start adding some guardrails on some of this stuff.
But in the meantime, you know, let's just put it out there, and, yeah, we'll see what happens. Nate,
have you seen the movie that came out, I think
on Netflix earlier this year, Mountainhead.
Speaker 2 (38:43):
No. So this is, what, like Fountainhead?
Speaker 1 (38:48):
Yes, yes. And in the movie, they actually have the exact same, I think, thought process about our intelligence as Sam Altman does, which is that, oh, people are going to miss this. So the movie even, just like, makes a show of it. They're like, you called this Mountainhead. Is that supposed to be like Fountainhead? I was like, guys, guys,
(39:08):
we got it. You don't have to explicitly put it
in the dialogue. It's one of those cringe moments. But yes,
that is in fact a line of dialogue, which is
heartbreaking to me, because I wanted to love this movie, disclaimer, I did not, because it was written by Jesse Armstrong, who was the creator and writer of Succession, which I think is one of the best
(39:30):
TV shows in recent memory. So it was sad to me that I not only did not love this movie but, another disclaimer, turned it off probably
less than halfway through because I felt that it was
that bad. But the premise of it is that you
have all of these tech billionaires come to kind of
(39:51):
a house called Mountainhead, like Fountainhead, and they're all,
you know, it's basically a big dick contest over whose net worth is higher at any given moment. And one of the companies has just released
this AI video generation tool that is starting real world
(40:11):
riots as they are ensconced in their mountain retreat, because the videos' content is so incredibly realistic. And one of the other guys has a company that can actually help people very accurately say, no, this is AI, and distinguish it, and his net worth is rising even faster, and so he doesn't... You know, it's kind of, yeah,
(40:34):
it's this kind of battle between morals and like letting
the world kind of go to hell because we'll be
fine because we're mega billionaires and look at our net
worth going up and up and up. But yeah, it
was very funny because when I saw Sora, I was like, wait,
this is kind of like that, you know, famous tweet
(40:55):
from forever ago. It was: sci-fi author: In my book, I invented the Torment Nexus as a cautionary tale. Tech company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create the Torment Nexus. So
this was, yeah, this seemed to be, like, exactly one
(41:16):
of those moments where I was like, wait, didn't we
just have a movie about the fact that maybe we
don't want to create a tool that does this exact thing? So I thought that was a pretty funny moment.
Speaker 2 (41:30):
Yeah. Well, look, I mean, for example, companies, you know, you can use their IP, like their stock characters, unless they specifically opt out of it. Like, that's kind of presumptuous, right?
Speaker 1 (41:42):
Should be the other way around, by the way, you
should have to opt in, not opt out.
Speaker 2 (41:46):
I'm sure they've done financial modeling, and, like, very few companies have been truly crippled by a lawsuit, right. Like, Napster was kind of the famous exception, right. But for the most part, if you're growing super duper fast, then you're going to lose a few of these big lawsuits, right. You know, we're already in an environment,
(42:06):
particularly on Twitter slash X, where, uh, you have, you collect videos from the eight billion people in the world, right. You collect something that happens somewhere to one of those eight billion people, and, like, Elon Musk tweets it, and it creates outrage, right. Sometimes those videos are manipulated or even fake, but often they're not, right.
(42:28):
it's just kind of like the cherry picking that you
get in there. And I think, on my dystopia meter, we're at, like, eight point two, I think, right.
Speaker 1 (42:38):
Eight point two out of ten? Yeah, yeah.
Speaker 2 (42:40):
Yeah, it's not linear. It's a long way to go
to get to a ten.
Speaker 1 (42:44):
Yeah, but I think eight point two, we're, you know, we're certainly... I think we are as well. And that's
such a positive note on which to leave our audience
this fine week. But yeah. I know
we say this so frequently at the end of our episodes,
but I really do. I'm very curious to see what's
(43:06):
going to end up happening with Sora and whether we
look back on this and are like, holy shit, you know, Nate,
you and I were so naive or whether we think
you know, we were very prescient about this. You know,
I'm very interested to see what's going to go down,
and I hope that it's less negative than we have
(43:27):
envisioned it during the course of the last forty five minutes.
I hope that we don't hit a ten anytime soon,
but we shall.
Speaker 2 (43:35):
See. Log off, log off, log off, go to.
Speaker 1 (43:41):
The bogging Nate and Maria logging off for today. We'll
see you all on Saturday. Let us know what you
think of the show. Reach out to us at Risky
Business at pushkin dot FM. Risky Business is hosted by
me, Maria Konnikova.
Speaker 2 (44:02):
And by me, Nate Silver. The show is a co-production of Pushkin Industries and iHeartMedia. This episode was produced
by Isabel Carter. Our associate producer is Sonia Gurwitt.
Lydia Jean Kott and Daphne Chen are our editors, and
our executive producer is Jacob Goldstein. Mixing by Sarah Bruguiere.
Speaker 1 (44:20):
If you like the show, please rate and review us
so other people can find us too. But once again, only if you like us; we don't want those bad reviews out there. Thanks for tuning in.