
February 6, 2025 • 54 mins

There are moments in history when people make huge technological advances all of a sudden. Think of the Manhattan Project, the Apollo missions, or, more recently, generative AI. But what do these moments have in common? Is there some set of conditions that lead to massive technological leaps?

Byrne Hobart is the author of a finance newsletter called The Diff, and the co-author of Boom: Bubbles and the End of Stagnation. In the book, Byrne makes the case for one thing that is really helpful if you want to make a wild technological leap: a bubble.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. There are these moments when people make huge technical
advances and it happens all of a sudden, or at
least it feels like it happens all of a sudden.
You know this is happening right now, most obviously with AI,
with artificial intelligence. It happened not too long ago with rockets,

(00:37):
when SpaceX dramatically lowered the cost of getting to space.
Maybe it's happening now with crypto. I'd say it's probably
too soon to say on that one. In any case,
you can look at technological breakthroughs in different fields and
at different times, and you can ask, what can we
learn from these? You can ask, can we abstract certain

(01:00):
you know, certain qualities, certain tendencies that seem to drive
toward these bursts of technological progress. There's a recent book
called Boom that asks this question, and it comes up
with an interesting answer. According to the book, one thing
that's really helpful if you want to make a wild
technological leap is a bubble. I'm Jacob Goldstein, and this is

(01:29):
What's Your Problem. My guest today is Byrne Hobart. He's
the author of a finance newsletter called The Diff, and
he's also the co-author of a book called Boom:
Bubbles and the End of Stagnation. When Byrne talks about bubbles,
he isn't just talking about financial bubbles where investors drive
prices through the roof. When he says bubble, largely he

(01:51):
means social bubbles, filter bubbles, little groups of people who
share some wild belief. He really gets at what he
means in this one sentence where he and his co-author
write, quote, transformative progress arises from small groups with
a unified vision, vast funding, and surprisingly poor accountability, end quote,

(02:14):
basically living the dream. Later in the conversation, Byrne and
I discussed the modern space industry and cryptocurrency and AI.
But to start, we talked about two case studies from
the US in the twentieth century. Byrne writes about them
in the book, and he argues that these two moments
hold broader lessons for how technological progress works. The two

(02:36):
case studies we talk about are the Manhattan Project and
the Apollo Missions. So let's start with the Manhattan Project,
and maybe one place to start it is with this
famous nineteen thirty nine letter from Albert Einstein and other
scientists to FDR, the President, about the possibility of the
Nazis building an.

Speaker 2 (02:56):
Atomic bomb, right. So that letter, it feels
like good material for maybe not a complete musical comedy,
but at least an act of a musical comedy, because
it's kind.

Speaker 1 (03:07):
Of a springtime for Hitler in Germany.

Speaker 2 (03:10):
There is this whole thing where you have this brilliant physicist,
but he is just kind of the stereotypical professor, you know,
crazy hair, very absent minded, always talking about these things
where no one, you know, no normal person can really
understand what he's talking about. And suddenly, instead of talking
about space time and, you know, energy and matter

(03:30):
in their relationship, suddenly he's saying someone could build a
really really big bomb and that person will probably be
a German, and that has some very bad implications for
the rest of the world.

Speaker 1 (03:41):
So now here we are, the president decides, okay, we
need to build a bomb, we need to spend a
wild amount of money on it. And this is a
thing that you describe as a bubble, which is interesting, right,
because it's not a bubble in the sense of market prices.
It's the federal government in the military, but it has
these other bubble-like characteristics in your telling, right, maybe

(04:02):
other meanings of bubble, the way we talk about a
social bubble or a filter bubble. Tell me about that, like,
why is the Manhattan Project a kind of bubble?

Speaker 2 (04:09):
Why is it a bubble? Because there's that feedback loop:
people take the idea of actually building the bomb
more seriously as other people take it more seriously, and
the more that you
have people like Oppenheimer actually dedicating time to the project,
the more other people think the project will actually happen,
this is actually worth doing. So you have this group
of people who start taking the idea of building a

(04:32):
bomb more seriously. They treat it as a thing that
will actually happen rather than a thing that is hypothetically
possible if this particular equation is right, if these
measurements are right, et cetera, and then they start actually
designing them.

Speaker 1 (04:43):
The Manhattan Project seems like this sort of point
that gets a bunch of really smart people to coalesce
in one place on one project at one time. Right.
It sort of solves the coordination problem the way whatever
you might say AI today is doing that, Like just
brilliant people suddenly are all in one place working on
the same thing in a way that they absolutely would

(05:04):
not otherwise be.

Speaker 2 (05:05):
That is true. And this was both within the US
academic community and then within the global academic community, because
you had a lot of people who were in Central
Europe or Eastern Europe who realized that that is just
not a great place for them to be and tried
to get to the UK, the US, or other Allied countries as
quickly as possible. And so there was just this

(05:26):
massive intellectual dividend of a lot of the most brilliant
people in Germany and in Eastern Europe and in Hungary, etcetera.
They were all fleeing and all ended up in the
same country. So, yeah, you have just this serendipity
machine where, if you were a physicist,
it was an incredible place for just overhearing really novel

(05:47):
ideas and putting your own ideas to
the test, because you had all the smartest
people in the world pretty much in this one little
town in New Mexico.

Speaker 1 (05:57):
Right. So the Los Alamos piece is the famous part,
you know, it's the part one has heard of with
respect to the Manhattan Project. There's a less famous part
that's really interesting, and that also seems to hold some
broader lessons as well, right. And that is basically
the manufacturing part: how do we
enrich enough uranium to build the bomb? If the physicists

(06:19):
figure out how to design it, talk about that piece
and the lessons there.

Speaker 2 (06:23):
Yes, so that one, you're right, is often
underemphasized in the history. It was more of an engineering
project than a research project, though there was a lot
of research involved. The purpose was to get enough
enriched uranium, the isotope that is
actually prone to these chain reactions, get it isolated, and

(06:44):
then be able to incorporate that into a bomb. They
were also working on other fissile materials because there were
multiple plausible bomb designs. Some used different triggering mechanisms, some
used different materials, and there were also multiple plausible ways
to enrich enough of the fissile material to actually build
a bomb. And so one version of the story is

(07:07):
you just go down the list and you pick the one
that you think is the most cost effective, most likely to work,
and so we choose one way to get just U
two thirty five, and we have one way to build the bomb.

Speaker 1 (07:18):
U two thirty five is the enriched uranium. Yes. And that,
by the way, is the way normal businesses do things
in normal times. You're like, well, we got to do
this really expensive thing. We got to build a factory,
and we don't even know if it's going to work.
Let's choose the version that's most likely to work. Like
that is the kind of standard move right, yeah, right?
And then the problem, though, is that if you try
that and you just got unlucky. You kept the

(07:41):
wrong bomb design and the right fissile material, or the right
material and the wrong bomb design, you've done a lot of
work which has zero payoff, and you've lost time. Right.
Like, crucially, there is a huge sense of
urgency present at this moment that is driving the whole
thing really right.

Speaker 2 (07:57):
We could also do more than one of them in parallel,
and that is what we did. And on the manufacturing
side that was actually just murderously expensive. If you are
building a factory and you build the wrong kind of factory,
then you've wasted a lot of
money and effort and time. So they just did
more than one. They did several different processes for enriching

(08:17):
uranium and for plutonium.

Speaker 1 (08:19):
All at the same time, right, And they knew they
weren't going to use all of them, they just didn't
know which one was going to work. So it's like, well,
let's try all of them at the same time and
hopefully one of them will work. Yes, Like that is
super bubbly, right. That is wild and expensive. That is
just throwing wild amounts of money at something in a
great amount of haste.

Speaker 2 (08:38):
Yes. Yeah. So if you believe that
there's this pretty linear payoff, then every additional investment you make,
you know, doesn't qualitatively change things. It just
means you're doing a little bit more of it. But
if you believe there's some kind of nonlinear payoff, where
either this facility basically doesn't work at all or
it works really really well, then when you

(08:59):
diversify a little bit, you do actually get just this
better risk-adjusted return, even though you're objectively taking
more risks.
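An editorial aside: the nonlinear-payoff argument here can be sketched with a toy calculation. The numbers below are illustrative assumptions, not figures from the episode; the point is only that with an all-or-nothing payoff, running independent long shots in parallel raises the chance that at least one pays off.

```python
# Toy illustration (numbers are assumptions, not from the episode):
# with an all-or-nothing payoff, running several independent long-shot
# processes in parallel raises the odds that at least one succeeds.

def p_at_least_one_success(p_each: float, n: int) -> float:
    """Probability that at least one of n independent attempts succeeds."""
    return 1 - (1 - p_each) ** n

# One enrichment process with an (assumed) 50% chance of working:
print(p_at_least_one_success(0.5, 1))  # 0.5
# Three such processes pursued in parallel:
print(p_at_least_one_success(0.5, 3))  # 0.875
```

Each extra parallel attempt costs more in expectation, which is why this only looks rational when the payoff is all-or-nothing and the downside of total failure is severe.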

Speaker 1 (09:06):
Interesting, right, So in this instance, it's if the Nazis
have the bomb before we do, it's the end of
the world as we know it, Yes, and so we
better take a lot of risk. And that's actually rational.
It reminds me a little bit of aspects of Operation
Warp Speed. I remember talking to Susan Athey, the Stanford economist,

(09:26):
early in the pandemic, who was making the case to
do exactly this with vaccine manufacturing. In, like, you know,
early twenty twenty, we didn't know if any vaccine was
going to work, and it takes a long time to
build a factory to make a vaccine, basically a tailored
factory. And she was like, just make a bunch
of factories to make vaccines, because if one of them works,
we want to be able to start working on
it that day. Like, that seems quite similar to this

(09:48):
in your work.

Speaker 2 (09:49):
Yeah, yeah, I think that's absolutely true that you you know,
the higher the stakes are, the more you want to
be running everything that can plausibly help in parallel. And
depending on the exact nature of what you're doing, there
can be some spillover effects. It's, you know, it's possible
that you build a factory for manufacturing vaccine A, and
vaccine A doesn't work out, but you can retrofit that factory

(10:11):
and start doing vaccine B. And you know, there are
little ways to shuffle things around a bit, but you
often want to go into this basically telling yourself, if
we didn't waste money and we still got a good outcome,
it's because we got very, very lucky, and that we
only know we're being serious if we did, in fact
waste a lot of money. And yeah, I think that
kind of inverting your view of risk is often a
really good way to think about these big transformative changes.

(10:34):
And this is actually another case where the financial metaphors
do give useful information about just real-world behaviors, because
at hedge funds, this is actually something that risk
teams will sometimes tell portfolio managers: you are making money
on too high a percentage of your trades. This means that
you are not making all the trades that you could,
and if you took your hit rate

(10:54):
from fifty five percent down to fifty three percent, we'd
be able to allocate more capital to you, even though
you'd be annoyed that you were losing money on more trades.

Speaker 1 (11:01):
Interesting, because overall you would likely have a more profitable
outcome by taking bigger risks and incurring a few more losses,
but the wins would be bigger and make up for the losses.
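An editorial aside: the hit-rate trade-off can be made concrete with made-up numbers (they are mine, not the guest's). Suppose each winning trade gains one unit and each losing trade costs one unit; a lower hit rate applied to more trades, because more capital is allocated, can still carry a higher expected profit.

```python
# Toy numbers (assumptions for illustration, not from the episode):
# each winning trade gains 1 unit, each losing trade costs 1 unit.

def expected_profit(hit_rate: float, n_trades: int) -> float:
    """Expected profit over n_trades with symmetric 1-unit wins and losses."""
    return n_trades * (hit_rate - (1 - hit_rate))

# A 55% hit rate on 100 trades vs a 53% hit rate on 200 trades,
# the extra trades made possible by a larger capital allocation:
print(expected_profit(0.55, 100))  # ~10 units
print(expected_profit(0.53, 200))  # ~12 units
```

With these assumed numbers, the looser filter loses more individual trades yet earns more overall, which is the risk team's point.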

Speaker 2 (11:11):
Yes, and this kind of thing, you know, it's very
easy if you're the one sitting behind the desk just
talking about these relative trade offs. It's a lot harder
if you are the first person working with uranium in
the factory and we don't quite know what the risks
of that are. But it is just a
generally true thing about trade-offs and risk that there is an
optimal amount of risk to take. That optimal amount is
sometimes dependent on what the downside risk of

(11:34):
inaction is. And so sometimes, if you're too successful,
you realize that you are actually messing something up.

Speaker 1 (11:41):
Yeah, you're not taking enough risk. So we all know
how the Manhattan Project ends. It worked. I mean, it
is a little bit of a weird one to start with.
You know, the basic idea is, like, technological progress is good,
risks are good, and we're talking about building the atomic
bomb and dropping it on two cities and it's you know,
it's morally a much easier question if you think it's

(12:03):
the Nazis. Sorry, but the Nazis are absolutely the worst,
and I definitely don't want them to have a bomb first.
You know, there is the argument that more people would
have died in a conventional invasion without the bomb.
I don't know. I mean, what do
you make of it? Like, obviously the book is very
pro technological progress, this show is basically pro technological progress.

(12:25):
But, like, the bomb isn't a happy one to start with.
Like, what do you make of it?

Speaker 2 (12:29):
Ultimately? Yeah, it's one of those things where
it does make me wish that we could
run the same simulation, you know, a couple of million
times and see what the net
lives lost and saved are in different scenarios. But I
guess, from like a purely
utilitarian standpoint, I suspect that there have been net lives

(12:52):
saved because of less use of coal for electricity generation,
more use of nuclear power. And that is directly downstream
of the bomb: that you can build these, you know,
by-design uncontrollable releases of atomic energy, you can also
build more controllable ones. Otherwise, getting the funding for
that would be a lot harder.

Speaker 1 (13:11):
And presumably we got nuclear power much sooner than we
otherwise would have because of the incredibly rapid progress of
the Manhattan Project. Is that fair?

Speaker 2 (13:22):
Which, I don't know. If
you let me push the button on whether I drop
an atomic bomb on a civilian population in exchange for
fewer people dying of respiratory diseases over the next couple decades,
you know, there, I would have to give it a
lot of thought.

Speaker 1 (13:38):
I'm not going to push that button,
but I'm never going to have a job where I
have to decide, because I can't deal. Okay. Something you
mention in the book kind of in passing that was
really interesting and surprising to me was that nuclear power
today accounts for eighteen percent of electric power generation in

(14:00):
the US. Eighteen percent, Like that is so much higher
than I would have thought, given sort of how little
you hear about existing nuclear power plants, right.

Speaker 2 (14:08):
Like, that is a lot. Yeah, yeah, it is.
It is a surprisingly high number. But also,
nuclear power is one of the most
annoying technologies to talk about, in the sense that it
doesn't do anything really really exciting other than provide essentially
unlimited power with minimal risk.

Speaker 1 (14:28):
And some amount of scary tail risk,
right? Yeah. Like, I mean, that is what is actually
interesting to talk about, sort of unfortunately for the world.
Given that it has a lot of benefits, there is
this tail risk, and once in a while something goes
horribly wrong, even though on the whole it seems to
be clearly less risky than, say, a coal-fired power.

Speaker 2 (14:50):
Plant, right. And the industry, they're aware of
those risks, and nobody wants to be responsible for that
kind of thing, and nobody wants to be testifying before
Congress about ever having cut any corner whatsoever in the
event that a disaster happens. So they do actually take
that incredibly seriously. So nuclear power does end up being

(15:11):
in practice much safer than other power sources. And then
you add in the externality that it doesn't really produce emissions,
and uranium exists in some quantities just about everywhere.

Speaker 1 (15:23):
No climate change, no local air pollution, has a lot
going for it. Always on. Okay, let's go to the moon.
So you write also about the Apollo missions us going
to the moon. It's the early sixties, right, was it
sixty one? Kennedy says we're gonna go to the moon

(15:44):
by the end of the decade. There's the Cold War context. Yes,
Kennedy announces this goal. What's the response in the US
when Kennedy says this, Yeah, so a lot of the response,
you know, at first people are somewhat hypothetically excited.

Speaker 2 (15:59):
As they start realizing how much it will cost, they
go from not especially excited to actually pretty deeply opposed.
And you know, this shows up in, well, there was
someone who coined the term moondoggle.

Speaker 1 (16:10):
Yeah, moondoggle. I loved moondoggle. I learned that from the book.
It was Norbert Wiener, like a famous technologist, not
a crank, right, somebody who knew what
he was talking about, who was like, this is a crazy idea.

Speaker 2 (16:23):
It's a moondoggle, right. And you know, this really works
its way into popular culture. Like, if you
go on Spotify and listen to the Tom Lehrer song
Wernher von Braun, the recording that Spotify has, it opens
with a monologue that is talking about how stupid the
idea of the Apollo program is. And
you know, this is again someone who

(16:43):
is in academia, who's a very very sharp guy and
who just feels like he completely sees through this
political giveaway program to big defense contractors and knows that
there's no point in doing this.

Speaker 1 (16:56):
You write that NASA's own analysis found a ninety percent
chance of failing to reach the
moon by the end of the decade. Like, it wasn't
just outside people being critical; NASA itself didn't think.

Speaker 2 (17:10):
It was going to work.

Speaker 1 (17:12):
There's a phrase you use in the book to talk
about these sort of bubble-like environments that
are of interest to you, and I found it really
interesting and I think we can talk about it in
the context of Apollo. That phrase is definite optimism. Tell
me about that phrase.

Speaker 2 (17:28):
Yes, So definite optimism is the view that the future
can and will be better in some very specific way,
that there is something we cannot do
now that we will be able to do in the future,
and it will be good that we can do it.

Speaker 1 (17:42):
And why is it important, Like it's a big deal
in your telling in an interesting way. Why is it
so important?

Speaker 2 (17:50):
It's important because that is what allows you to actually
marshal those resources, whether those are the people or the
capital or the political pull, to put them all in
some specific direction and say we're going to build this thing.
So we need to actually go step by step and
figure out, Okay, what specific things have to be done,
what discoveries have to be made, what laws have

(18:10):
to be passed in order for this to happen. And
so it's definite optimism in the sense that
you're saying there is a specific thing we're going to build.
It's the kind of thing that can keep you going
when you encounter temporary setbacks. And that's where the
optimism part comes in, because if you have a less
definitely optimistic view of that project, you might say

(18:31):
the goal of the Apollo program is to figure out
if we can put a person on the moon. But
I think what that leaves you open to is the
temptation to give up at any point, because at any
point you can have, you know, a botched launch or
an accident, or you're designing some component and
the math just doesn't pencil out. You know

(18:51):
it's going to weigh too much to actually make it
onto the craft. And you could say, okay, well, that's how
we figured out that we're not actually doing this. But
if you do just have this kind of delusional view
that, no, if there's a mistake, it's a mistake
in my analysis, not in the ultimate plan here, and
that it is physically possible, we just have to figure
out all the details, then I think that does
set up a different kind of motivation, because at that

(19:12):
point you can view every mistake as just exhausting the
set of possibilities and letting you narrow things down to
what is the correct approach. What you sort of needed
was this sort of very localized definite optimism, where
you could imagine a researcher, or an engineer, or someone
throughout the project, thinking to themselves: Okay,

(19:34):
this will probably not work overall, but the specific thing
I'm working on, whether it is designing a spacesuit or
designing this rocket or programming the guidance computer, that one
I can tell that my part is actually going to work,
or at least I believe that I can make it work.
And two, this is my only chance to work with
these really cool toys. So if the money is going
to be wasted at some point, let that money be

(19:56):
wasted on me. And I think that kind of
attitude, of just knowing that you have one shot
to actually do something really interesting, that you will not get
a second chance, if everyone believes that, it does become
a coordinating mechanism where now they're working extremely hard, they
all recognize that the success of what they are doing
is very much up to them, and then that ends
up contributing to this group success.

Speaker 1 (20:18):
So it's like this, if I'm going to do this,
I got to do it now. Everybody's doing it. Now,
we got the money. Now this is our one shot.
We better get it right, We better do everything we
can to make it work.

Speaker 2 (20:30):
Yes, fear of missing out?

Speaker 1 (20:33):
Yeah, FOMO, right. So FOMO, it's funny, people talk about
that as, like, a dumb investment thesis, basically, right? It's
like a meme stock idea. But you talk about it
in these more interesting contexts, basically, right? More meaningful, I
would say. Yes, yeah.

Speaker 2 (20:48):
So in the purely straightforward way, the idea
is there are sometimes these very time-limited opportunities to
do something, and if you're capable of doing that thing,
this may be your only chance, and so missing out
is actually something you should be afraid of. So you know,
if you actually have a really clever idea for an
AI company, this is actually a time where you can

(21:08):
at least make the attempt. So yeah, we do argue that
missing out is something you should absolutely fear.

Speaker 1 (21:13):
So what happens with the Apollo project? Just even briefly,
like, talk about just how big it is and how
risky it is. Like, it's striking, right?

Speaker 2 (21:22):
Right. Yeah, so it was running, the expenses
were running at, like, a low single-digit percentage of
GDP for a while.

Speaker 1 (21:29):
So a couple percent of the value of everything everybody
in the country does is going into the Apollo mission.
Just this one plainly unnecessary thing that the government has
decided to do.

Speaker 2 (21:42):
Right. And this is one of the cases where there
were very powerful spillover effects, because the Apollo Guidance
Computer needed the most lightweight and least power-consuming and
most reliable components possible. And if you were building a
computer conventionally at that time and you had a budget,
you would probably build it out of vacuum tubes. And

(22:02):
you knew that the vacuum tubes they're bulky, they consume
a lot of power, they throw off a lot of heat,
they burn out all the time, but they are fairly cheap.
But in this case, there was an alternative technology.
It was extremely expensive, but it was lightweight,
didn't use a lot of power, and did not have
moving parts. And that's the integrated circuit, so transistor-based computing.

Speaker 1 (22:26):
The chip. What we know today as the chip.

Speaker 2 (22:29):
Yes, the chip.

Speaker 1 (22:30):
You write that in nineteen sixty three, NASA bought sixty
percent of the chips made in the United States. Just NASA,
not the whole government, just NASA. Sixty percent.

Speaker 2 (22:41):
They actually bought more chips than they needed because they
recognized that the chip companies were run by, you know,
very very nice electrical engineering nerds who just love designing tiny,
tiny things, and that these people just don't know how
to run a business, and so they were worried that
Fairchild Semiconductor would just run out of cash at
some point and then NASA would have half of a

(23:03):
computer and no way to build the rest of it.
So they actually over-ordered, and they used integrated
circuits for a few applications that actually were not so
dependent on the power consumption and weight and things. So
that critique of the Apollo program was directionally correct. It
was money being splashed out to defense contractors who were
favored by the government, but in this case it was
being done in a more strategic and thoughtful way and

(23:25):
kind of kept the industry going.

Speaker 1 (23:26):
So you talk a fair bit in the book about
the sort of religious and quasi religious aspects of these
little groups of people that come together in these bubble
like moments to do these big things, and that's really
present in the Apollo section, like talk about the sort
of religious ideas associated with the Apollo mission that the

(23:51):
people working on the mission had.

Speaker 2 (23:52):
Yeah, I mean, you name it after a Greek god
and you're already starting a little bit religious. So
there were people who worked on these missions who felt
like this is part of mankind's destiny, to explore the stars,
and that there's this whole universe created
by God, and it would be kind of weird. You know,
we can't second guess the divine, but it's a little

(24:14):
weird for God to create all of these astronomical bodies
that just kind of look good from the ground and
that you're not actually meant to go visit.

Speaker 1 (24:20):
You talk about somewhat similar things in other kind of
less obviously spiritual dimensions of people coming together and having
it kind of more than rational. You use this word
thymos, from the Greek meaning spirit. Like, what's going on there?
More broadly, why is that important, more generally, for technological progress?

(24:43):
Because, so, thymos is part of this tripartite
model of the soul, where you have your appetites.

Speaker 2 (24:50):
And your reason, and then your thymos, like your longing
for glory and honor and this kind of transcendent
achievement. And logos, reasoning, only gets you so far.
You can reason your way into some pretty
interesting things, but at some point you do decide that
the reasonable thing is probably to take it a little

(25:12):
bit easier and not take certain risks. And
it still is just this pursuit of something greater,
you know, something beyond the ordinary, something really
beyond the logos, right, like beyond what you could get
to just by reasoning one step at a time. And
I think that is
just a deeply attractive proposition to many people. And

(25:36):
it's also a scary one because at that point, you know,
if you're doing things that are beyond
what is the rational thing to do, then of course
you have no rational explanation for what you did wrong
if you mess up. Yeah. And you are sort of
betting on some historical contingencies.

Speaker 1 (25:50):
That's the definite optimism part, right, Betting on historical contingencies
is another way of saying definite optimism. Right. So back
to the moon. So we get to the moon. In fact,
against all odds, we make it. Uh, And there's this
moment where it's like, you know, today the moon tomorrow

(26:12):
the solar system, but in fact it was today the
moon tomorrow, not even the.

Speaker 2 (26:18):
Moon, right, Like what happened? Well, you know, you had
asked about what these megaprojects have in common with financial bubbles,
and one of the things they have in common is
sometimes there's a bust, and sometimes that bust is actually
an overreaction in the opposite direction. And people take
everything they believed in, say,

(26:38):
you know, nineteen sixty nine, about humanity's future in
the stars, and they say, okay, this is
exactly the opposite of where things will actually go and
the exact opposite of what we should care about. That
we have plenty of problems here on Earth, and why
would we, you know, do we really want to turn
Mars into just another planet that also has problems of
racism and poverty and nuclear war and all that stuff?

(26:58):
So maybe we should stay home and fix
our stuff. Then, for public policy, you'd actually need for there
to be some kind of resurgence in belief in space.
You need some kind of charismatic story, and perhaps to
an extent, we have that right now. Yes, maybe Elon's
not the perfect front man for all of this, but
he is certainly someone who demonstrates that space travel
can be done, it can be improved, and that it's

(27:21):
just objectively cool. That it is just hard to watch
a SpaceX launch video and not feel something.

Speaker 1 (27:28):
Yes, so good. I want to talk more about
space in a minute. So it's interesting these two stories
that are kind of in the middle of your book.
They're kind of the core of the book, right, these
two interesting moments that are non financial bubbles, when you
have this incredible technological innovation in a short amount of
time, an unrealistically fast, impressive outcome.

(27:53):
And they're both pure government projects. They're both you know,
command and control economy. It is not the private sector,
it is not capitalism. What do you make of that?

Speaker 2 (28:06):
I would say, there's a very strong indirect link for
a couple of reasons. One is just the practical,
kind of the practical enough reason that personnel is policy, and that
in the nineteen thirties, the US government
was hiring and the private sector mostly wasn't, and so
basically all the ambitious people in

(28:28):
the country tried to get government jobs. And that is
usually not the case, and there are certainly circumstances
where that's a really bad sign, but in this case
it was great. It meant that there were a lot
of New Deal projects that were staffed by the people
who would have been rising up the ranks at RCA
or General Electric or something a decade earlier. Now they're
running New Deal projects instead, and they're again rising up

(28:49):
their ranks really fast, having a very large real-world impact early in their careers. And those people had
been working together for a while, and you know, they
knew each other. There was a lot of just institutional
knowledge about how to get big things done within the
US government, and a lot of that institutional knowledge could
then be redirected. So you have the New Deal and
then the war effort, and then you have the post

(29:09):
war economy where, you know, it still takes a while for the government to fully relax its control, and
then very soon we're into the Korean War. So yeah,
there was just a large increase in state capacity and
just in the quality of people making decisions within the
US government in that period.

Speaker 1 (29:28):
We'll be back in a minute to talk about bubble
esque things happening right now, namely rockets, cryptocurrency, and AI. Okay,
now to space today. Burn and I talked about SpaceX

(29:48):
in particular, because you know, it really is the company
that launched the modern space industry, and there's this one
key trait that SpaceX shares with the other projects Burn
wrote about in the book. It brought together people who
share a wild dream. If you go to work at SpaceX,
it's probably because you believe in getting humanity to mar Yeah.

Speaker 2 (30:10):
Yeah, it's not just that you believe in the dream.
But when you get the job, you're suddenly in an
environment where everyone believes in the dream. And if you're
working at one of those organizations, you're probably not working nine
to five, which means you have very few hours in
your day or week where you are not completely surrounded
by people who believe that people will be living on Mars, and that this is the organization that

(30:32):
will make it happen. And that just has to really mess with your mind. Like, what is normal to an engineer working at SpaceX in two thousand and six is completely abnormal to ninety nine point nine percent of the human population. And, you know, most of the exceptions are, like, six-year-old boys who just watched Star Wars for the first time.

Speaker 1 (30:49):
Mars is crazy. Yeah, I mean really, as I went through the book, I was like, oh, really, the bubble you're talking about is a social bubble, like the meaningful bubble. Like maybe there's a financial bubble attached, maybe there isn't. But what really matters is you're in this weird little social bubble that believes some wild thing together, that believes it is not wild, that believes it is going to happen,

(31:09):
like that's the thing. Yeah, and has money, has the money to act on their wild belief. Yes. And so, you know, getting the money does mean interacting with the normie sphere, interacting with people who don't.

Speaker 2 (31:23):
Quite buy into all of it. But when you have these really ambitious plans and you're taking them seriously, you're doing them step by step, some of those steps do have other practical applications. And so that is the space story. It was not just a straight shot: we are going to invest all the money Elon got from PayPal into going to Mars and hopefully we get to Mars before we run out. Yeah, it was, you know, we're

(31:46):
going to build these prototypes, We're going to build reusable rockets.
We're going to use those for existing use cases, and
we will probably find new use cases. And then once
we get really really good at launching things cheaply, well,
there are a lot of satellites out there, and perhaps
we should have some of our own, and we can
do it at sufficient scale, then maybe we can just
throw a global communications network up there in the sky

(32:06):
and see what happens next. So yeah, that's, you know, the intermediate steps. Each one is basically taking the, like, the spirited "here's our grand vision of the future, and, you know, here's my destiny, and I was put on earth to do this," and saying okay, well, the next step is to have enough money to pay rent next month, right.

Speaker 1 (32:24):
Tomorrow, to get to Mars. So is there a space bubble right now? I think so.

Speaker 2 (32:34):
I think there is. I think there are people who
look at SpaceX and say this is achievable, and that
more is achievable. They also look at SpaceX and say
this is a kind of infrastructure, that there are things
like doing manufacturing in orbit or doing manufacturing on the Moon,
where in some cases that is actually the best place
to build something.

Speaker 1 (32:54):
Basically because SpaceX has driven down the cost so much
of getting stuff into orbit, new ideas that would have
been economically absurd twenty years ago, like manufacturing in space, are now plausible. And so this is the sort of
bubble building on itself, and like, why is it not
just an industry now? Why is it a bubble?

Speaker 2 (33:13):
In your telling, it is the feedback loop where what SpaceX does makes more sense if they believe that there will be a lot of demand to move physical things off of Earth and into orbit, and perhaps further out. That if they believe that there's more demand for that, they should be investing more in R and D. They should be building bigger and better rockets, and

(33:35):
they should be doing the, you know, big fixed-cost investment that incrementally reduces the cost of launches and only pays for itself if you do a lot of them. And
then if they're doing that, and you have your dream of we're going to manufacture drugs in space, and, you know, the marginal cost is low once you get stuff up there, well, that dream is a little bit more plausible if you can actually plot

(33:56):
that curve of how much it costs to get a kilogram into space and say, you know, there is a specific year at which point we would actually have the cost advantage versus terrestrial manufacturing.

Speaker 1 (34:06):
So it's this sort of coordinating mechanism, like what you also write about with Microsoft and Intel in the
eighties and nineties, where it's like, oh, they're building better chips,
so we will build better software, and then because they're
building better software, will build better chips. So this is
like a more exciting version of that, right because it's
going to get even cheaper to send stuff to space.

(34:28):
We can build this crazy factory to exist in space,
and then that tells SpaceX, oh, we can in fact
keep building, keep innovating, keep spending money.

Speaker 2 (34:37):
Yes, And so someone has to do just half of that,
like the half of that that makes no sense whatsoever.

Speaker 1 (34:43):
That was SpaceX at the beginning, right? That was like, yes, just a guy with a lot of money and a crazy dream.

Speaker 2 (34:49):
Yeah, it just really helps to have someone who's eccentric and has a lot of money and is willing to throw it at a lot of different things. Like Musk. He spent some substantial fraction of his net worth right after the PayPal sale on a really nice sports car and then immediately took it for a drive and wrecked it. He had no insurance and was not wearing a seat belt. So the Elon Musk story could have just been

(35:10):
this proverb about dot-com excess and what happened when you finally gave these people money, as they immediately bought sports cars and wrecked them. Instead, it's a story about a different kind of success, but it's still, I guess, you know, what that illustrates: really risky.

Speaker 1 (35:25):
Yeah.

Speaker 2 (35:26):
Yeah, there's a risk level where you are going for a joyride in your two-million-dollar car and you haven't bothered to fill out all the paperwork or buy the insurance, and that is the risk tolerance of someone who starts a company like SpaceX.

Speaker 1 (35:41):
Okay, enough about space. Let's talk about crypto, formerly known as cryptocurrency. Let's talk about bitcoin, and let's talk about
bitcoin especially at the beginning, right before it was number
go up, when it was it really was true believers, right,
it was people who had a crazy worldview like you're

(36:01):
talking about in these in these other contexts.

Speaker 2 (36:04):
Yes, so we still don't know for sure who Satoshi Nakamoto was, and I think everyone in crypto has at least one guess, sometimes many guesses. But whoever Satoshi was, whoever they were.

Speaker 1 (36:16):
This is the creator of bitcoin, for the one person
who doesn't know.

Speaker 2 (36:19):
Yeah, they had this view that one of the fundamental
problems in the world today is that if you are
going to transfer value from one party to another, you
need some trusted intermediary.

Speaker 1 (36:32):
You need a trusted intermediary like a government and a bank. Right, typically in money you need both governments and banks, the way it works in the world today. Right? Yes. And
Satoshi happened to publish the Bitcoin White Paper in October
two thousand and eight, which was a great moment to
find people who really didn't want to have to deal
with governments and banks when they were dealing with money

(36:53):
at the financial crisis. Right, right in the teeth of
the financial crisis.

Speaker 2 (36:57):
Yes. So it is in one sense just this technically clever thing, and then in another sense
it's this very ideological project where he doesn't like central banks,
he doesn't like regular banks. He feels like all of
these institutions are corrupt, and you know, your money is
just an entry in somebody's database, and they can update
that database tomorrow and either change how much you have

(37:18):
or change what it's worth, and we need to just
build something new from a clean slate. And there's also
I think there's this tendency among a lot of tech people, too. When you look at any kind of communications technology (and money, broadly defined, is a communication technology), you're
always looking at something that has evolved from something simple,
and it has just been patched and altered and edited
and tweaked and so on until it works the way

(37:40):
that it works. But that always means that you can easily come up with some first-principles view that's a whole lot cleaner, easier to reason about, and omits some mistakes. And then you often find that, okay, you omitted all the mistakes that are really, really salient about fiat, but then you added some brand-new mistakes, or added mistakes that we haven't made in hundreds of years. So it's full of trade-offs.

Speaker 1 (38:00):
It gets complicated. But at the beginning, right, so the white paper comes out, and, you know, I did a story about bitcoin in twenty eleven, which was still quite early. We were shocked that it had gone from ten dollars a bitcoin to twenty dollars a bitcoin. Thought we were reading it wrong. And at that time, like, I talked to Gavin Andresen, who was very early

(38:21):
in the bitcoin universe, like he was not in it
to get rich, right, Like he really believed, he really
believed in it, and that was the vibe then, and
like he thought it was gonna be money, right. The
dream was people will use this to buy stuff. And
one thing that is interesting to me is, yeah, some
people sort of use it to buy stuff, but basically

(38:44):
not right like that it would go from twenty dollars
a bitcoin to one hundred thousand dollars a bitcoin without
some crazy killer app, without becoming the web, without becoming
something that everybody uses whether they care about it or not.
That I would not have guessed. And it seems weird
and plainly now crypto is full of some people who
are true believers and a lot of people who just

(39:04):
want to get rich, and some of them are pretty scammy.

Speaker 2 (39:08):
Yeah. Yeah, the grifter coefficient always goes up with the price, and then, you know, the true believers are still there during the next eighty percent drawdown. And I'm sure there will be a drawdown something like that at some point in the future. It's just that that's kind of the nature of these kinds of assets.
Bitcoin was originally conceived as more of a currency, and so Satoshi talked about some hypothetical products you

(39:30):
could buy with it. And then, like, the first bitcoin killer app, to be fair, was e-commerce. It was specifically drugs. Yes, it is.

Speaker 1 (39:40):
It is a very very.

Speaker 2 (39:41):
Libertarian product in that way. So it doesn't work very well as a dollar substitute, for many reasons, you know, most of the obvious reasons. But it is
interesting as a gold substitute, where part of the point
of gold is that it is very divisible and your
gold is the same as my gold, and we've all
kind of collectively agreed that gold is worth more than

(40:02):
its value as just an industrial product. And the neat thing about gold is it's really hard to dig up anymore. Gold supply is extremely inelastic.

Speaker 1 (40:12):
And so bitcoin is designed to have a finite supply, right? Yeah, important analogy. Yes. More generally, like, it's a long time out now, it's, you know, seventeen years
or something since the white paper. What do you make
of the sort of costs and benefits of cryptocurrency so far?

(40:34):
The costs are more obvious to me. Like, there's a lot of grift. It's, you know, by design, very energy intensive. And I'm open to, like, better payment systems; there are lots of just boring efficiency gains you would think we could get that we haven't gotten, right? Yeah, what do you think about the costs versus the benefits so far?

Speaker 2 (40:54):
I think in terms of the present value of future gains, probably better off. I think in terms of, yeah, realized gains so far, worse off. So, basically worse off so far, but in the long run we'll be better off; we just haven't gotten the payoff yet. This is actually something about general-purpose technologies. It is a feature of general-purpose technologies that there's often a point early in their history where the net benefit has

(41:15):
been negative.

Speaker 1 (41:16):
What would make it clearly positive? Like, what's
the killer return you're hoping to see from cryptocurrency?

Speaker 2 (41:24):
Yeah, so I think the killer return would be if there is a financial system that is open, in the sense that starting a financial institution, starting a bank or an insurance company or something, is basically: you write some code and you click the deploy button and your code is running. You have capitalized your little entity, and now you can provide whatever it is.

(41:44):
Like mean-tweet insurance: you're selling people, for a dollar a day, you will pay them one hundred dollars if there's a tweet that makes them cry, and now you're in the insurance business. Yes, you know, you get to speed-run all kinds of financial history; I'm sure you learn all about adverse selection. But, like, a financial system where anything can be plugged into something else and basically everything is an API call away is just a

(42:07):
really interesting concept. And the fiat system is moving in that direction, but slowly.

Speaker 1 (42:12):
To be clear, like why is it? Why is that
better on balance? So for it to for it to
be net positive that has to be not only interesting,
but that has to like lead to more human flourishing
and less suffering than we would have in its absence.

Speaker 2 (42:26):
Right. Yeah, markets provide large positive externalities. There's a lot of effort in those markets that feels wasted, but markets transmit information better than basically anything else, because what they're always transmitting is the information you actually care about. So, like, oil prices: you don't have to know that oil prices are up because there was a

(42:50):
terrorist attack or because someone drilled a dry hole or whatever. What you respond to is just: gas is more expensive, and therefore I will drive less; or, you know, energy is cheaper or more expensive, and so I need to change my behavior. So it's always transmitting the actually useful information to the people who would want to use it.
And the more complete markets are and the more things

(43:10):
there are where that information can be instantaneously transmitted to
the people who want to respond to it, the more
everyone's real world behavior actually reflects whatever the underlying material
constraints are on doing what we want to do.

Speaker 1 (43:21):
The sort of crypto dream there is just more finance, more markets, more feedback, more market feedback, better financial services as a result. That's the basic view you're arguing for.

Speaker 2 (43:37):
It's just a really interesting way to build up new financial products
from first principles. And sometimes you learn whether those first
principles are wrong, but that itself is valuable. Like, there is actual value in taking something that is a tradition or a norm, understanding why it works, and therefore deciding that that norm is actually a good norm.

Speaker 1 (43:56):
Good. Last one. All right, you know what it's going to be. You tell me what the last one is.

Speaker 2 (44:03):
Is AI a bubble?

Speaker 1 (44:05):
Yes. But you sound so sad about it. Of course we've got to talk about AI, right? Are you sad to talk about AI? Like, it's exactly what you're writing about. Yeah. When you hear Sam Altman talk about starting OpenAI, it's like, we basically said,

(44:27):
you know, we're going to make AGI, artificial general intelligence. Come work with us. And when he talks about it, it's like there was a universe of people, like the smartest people, who really believed, oh, that's what they wanted to do, so they came and worked with us. Which seems like exactly your story.

Speaker 2 (44:45):
Yes. It turns out that a lot of people have had that dream, and for a lot of people, maybe it wasn't what they were studying in grad school, but it was why they ended up being the kind of person who majors in computer science and then tries to get a PhD in it and, you know, goes into the more researchy end of the software world. So yeah, there were people for whom it was incredibly refreshing to hear that someone actually wants

(45:07):
to build a thing.

Speaker 1 (45:08):
So you have that kind of shared belief. I mean,
at this point, you have these other elements of what
you're talking about, right, like a sense of urgency, an
incredible amount of money, elements of spiritual or quasi spiritual belief.

Speaker 2 (45:29):
Yes, there are pseudonymous OpenAI employees on Twitter who will tweet about things like building God. So yeah, they're taking it in a weird spiritual direction. But I think there is something, you know, it is interesting that a feature of the natural world is that if you arrange refined sand and a

(45:51):
couple of metals in exactly the right way and type in the right incantations and add a lot of power, you get something that appears to think and that can trick someone into thinking that it's a real human being.

Speaker 1 (46:03):
The "is it good or is it bad" question is quite interesting here. Obviously too soon to tell, but it's striking to me in the case of AI that the people who seem most worried about it are the people who know the most about it, which is not often the case, right? Usually the people doing the work, building the thing, just love it and think it's great. In

(46:24):
this case, it's kind of the opposite.

Speaker 2 (46:26):
Yeah, I think the times when I am calmest about AI and least worried about it taking my job are times when I'm using AI products to slightly improve how I do my job. That is, better natural-language search, or, actually, most of it is processing natural language. When
there are a lot of pages I need to read

(46:48):
which contain, you know, if it's like a thousand pages of which five sentences matter to me, that is a job for the API and not a job for me. But it is now a job that the API and I can actually get done. And my function is to figure out what those five sentences are, and figure out a clever way to find them, and then the AI's job is to do the grunt work of actually reading through them.

Speaker 1 (47:07):
That's AI as a useful tool, right? That's the happy AI story.

Speaker 2 (47:11):
Yeah. And I actually think that preserving your own agency is a pretty big deal in this context. So I think that if you're making a decision, it needs to be something where you have actually formalized it to the extent that you can formalize it, and then you have made the call. But for a lot of the grunt work, AI is just a way to massively parallelize having an intern.

Speaker 1 (47:34):
Plainly, it's powerful, and you're talking about what it can do right now. I mean, the smartest people are like, yes, but we're gonna have AGI in two years, which I don't know if that's right or not. I don't know how to evaluate that claim. But it's a wild claim. It's plainly not obviously wrong on its face, right? It's possible. Can you even start to parse that? You're giving sort

(47:56):
of little things today, about, oh, here's a useful tool, and here's the thing I don't use it for. But there's a much bigger set of questions that seem imminent.

Speaker 2 (48:03):
You know, there's a certain kind of radical uncertainty there.
You know, I think it increases wealth inequality, but it also means that intelligence is just more abundant and is available on demand and is baked into more things. I think that, you know, you can definitely sketch out really,
really negative scenarios. You could sketch out, you know, not

(48:23):
end of the world, but maybe might as well be for the average person: scenarios where every white-collar job gets eliminated and then a tiny handful of people have just unimaginable wealth and, you know, rearrange the system to make sure that doesn't change. But I think there are
a lot of intermediate stories that are closer to just
the story of say, accountants after the rise of Excel,

(48:44):
where there were parts of their job that got much
much easier and then the scope of what they could
do expanded. It was the...

Speaker 1 (48:50):
Bookkeepers who took it on the chin, it turns out. Yeah, like, Excel actually did drive bookkeepers out of work and it made accountants more powerful.

Speaker 2 (49:00):
Yeah. So, you know, I think within a company, you'll have specific job functions that do mostly go away, and then a lot of them will evolve. And so the way that AI seems to be rolling out in big companies in practice is they generally don't lay off a ton of people. They will sometimes end outsourcing contracts, but in

(49:22):
a lot of cases, they don't lay people off. They
change people's responsibilities. They ask them to do less of
one thing and a whole lot more of something else.
And then in some cases that means they don't have
to do much hiring right now, but they think that
a layoff would be pretty demoralizing, so they sort of
grow into the new cost structure that they can support.
And then in other cases there are companies where they realize, wait,

(49:44):
we can ship features twice as fast now, and so
our revenue is going up faster, So we actually need
more developers because our developers are so much more productive.

Speaker 1 (49:55):
We'll be back in a minute with the lightning round. Okay,
let's finish with the lightning round. The most interesting thing
you learned from an earnings call transcript in the last.

Speaker 2 (50:12):
Year, most interesting thing from a transcript in the last year,
I would say there was a point, this might have
been a little over yeargo. There was a point at
which Satia Nadella was talking about Microsoft's AI spending, and
he said we are still at the point, and I
think he and Zuckerberg both said something to the same
effect in the same quarter, which is very exciting for

(50:35):
Nvidia people. But it was like, we're at the point
where we see a lot more risk to underspending than
to overspending on AI specifically.

Speaker 1 (50:43):
That really speaks to your book, right, that really is
like bubbly as hell in the context of your book,
like overspending, like the Apollo missions, like the Manhattan Project,
like the big risk is that we don't spend enough.

Speaker 2 (50:56):
And also, they know that their competitors are listening to these calls too, so they were also saying that this is kind of a winnable fight, that they do think that there is a level of capital spending at which Microsoft can win simply because they took it more seriously than everybody else.

Speaker 1 (51:11):
So he's like, yes, we're going to spend billions and billions of dollars on AI because we think we can win. And Zuckerberg implicitly, too. What's one innovation in history that you wish didn't happen?

Speaker 2 (51:36):
I wish there were some reason that it was infeasible
to have really, really tight feedback loops for consumer facing apps,
particularly games.

Speaker 1 (51:48):
Is that a way of saying you wish games were
less addictive?

Speaker 2 (51:51):
Yeah, I wish games were less addictive, or that they weren't as good at getting more addictive.
So I wrote a piece in the newsletter about this recently, because there was that wonderful article on the loneliness economy in the Atlantic a couple of weeks back that was talking about it. One of the pandemic trends that has mean-reverted the least is how much time people spend alone. And I think one of the reasons for that is that all the things you do alone,

(52:14):
they are things that produce data for the company that
monetizes the time that you spend alone. And so the
fact that we all watched a whole lot of Netflix
in the spring of twenty twenty means that Netflix has
a lot more data on what our preferences are.

Speaker 1 (52:27):
So they got better at making us want to watch Netflix,
and all the video games we played on our phones
got better at making us addicted to keep playing video
games on our phones. Yeah, that's a bummer. It's a bummer.
What was the best thing about dropping out of college
and moving to New York City at age eighteen.

Speaker 2 (52:48):
So I would say that it really meant that I could, and had to, just take full responsibility for outcomes, and that I get to take a lot more credit for what I've done since then, but also get a lot more blame. There isn't really a brand name to fall back on, and so if

(53:10):
someone hires me, they can't say this person got a degree from institution X. You know, I dropped out of a really bad school too, so there's not even the extra upside of, you know, "my startup was so great, I just had to leave Stanford after only a couple of semesters." No, it was Arizona State, and I didn't even party. But yeah, it's just

(53:35):
being a little more in control of the narrative, and also just knowing that it's a lot more up to me.

Speaker 1 (53:42):
What was the worst thing about dropping out of college
and moving to New York at eighteen?

Speaker 2 (53:46):
So one time I went through a really, really long interview process for a job that I really wanted, and at the end of many, many rounds of interviews and, you know, a work session and lots of stuff, the hiring committee rejected me because I didn't have a degree, and that was on my resume. So that was kind of inconvenient. I guess another downside: it might have been nice

(54:09):
to spend more time with fewer obligations and access to
a really good library.

Speaker 1 (54:22):
Byrne Hobart is the co-author of Boom: Bubbles and the End of Stagnation. Today's show was produced by Gabriel Hunter Cheng. It was edited by Lydia Jeane Kott and engineered by Sarah Brugier. You can email us at problem at Pushkin dot FM. I'm Jacob Goldstein, and we'll be back next week with another episode of What's Your Problem.