
November 3, 2025 77 mins

Join us as we welcome Jacob Ward, a veteran journalist and thought leader, to explore the profound impact of artificial intelligence on our lives. In this episode, we delve into how AI interacts with human behavior, the societal implications of predictive policing and surveillance, and the future of work in an AI-driven world. We also discuss the concept of a technocracy, the Fermi Paradox, and the importance of purpose in human satisfaction. Tune in for a thought-provoking conversation that navigates the cultural and economic shifts shaping our future. Welcome back to Infinite Rabbit Hole!

Check out more of Jacob's work at https://www.jacobward.com/

For everything IRH, visit InfiniteRabbitHole.com. Join us live every Sunday on Twitch.tv/InfiniteRabbitHole at 8PM CST!

*Make sure to check out the updated MERCH SHOP by clicking the "Merch" tab on the website!!!* It's a great way to help support the show!


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Hey travelers. In today's episode, we're
thrilled to welcome Jacob Ward, a veteran journalist and thought
leader focused on the complex relationship between technology,
science, and human behavior. Jacob brings his extensive
experience as a correspondent for Major Networks and his
insights from his critically acclaimed book The Loop to our

(00:22):
discussion. We'll dive into the fascinating
world of artificial intelligence and its profound impact on our
lives. Explore how AI interacts with
our human behavior, often tapping into our automatic
decision making processes driven by our quote-unquote lizard
brain. We'll discuss the societal
implications of AI, from potential job losses to ethical

(00:45):
concerns, and how predictive policing and surveillance are
reshaping our privacy landscape. As we look to the future, we'll
question the value of human labor in an AI-driven world and the
potential mental health challenges that could arise from
increased free time and a lack of purpose.
We'll also touch on the concept of a technocracy, where a few

(01:08):
hold significant power due to technological advancements, and
consider the vastness of the universe with the Fermi Paradox
and the possibility of extraterrestrial life.
Join us as we navigate these cultural and economic shifts,
emphasizing the importance of purpose and human satisfaction,

(01:29):
and the need for long term thinking to address global
challenges. Welcome back to Infinite Rabbit
Hole. Welcome back to The Infinite

(02:15):
Rabbit Hole. I'm your host, Jeremy, and
tonight we have a special guest like we typically do.
This one, though, is going to dissect a very interesting and
current event related topic, AI and a lot of the aspects of AI
that you're not thinking about. So today we are thrilled to be
joined by Jacob Ward, a veteran journalist, author, and thought

(02:40):
leader focused on the complicated relationship between
technology, science, and human behavior.
For years, Jacob has served as a correspondent for major networks
including NBC News, where he reported on technology and its
social implications for shows like Nightly News and the Today
Show. His extensive career includes

(03:01):
work with PBS, CNN, and Al Jazeera, and he previously held the
position of Editor in Chief of Popular Science magazine.
He is the host and co-writer of the landmark four-hour PBS series
Hacking Your Mind, which explores the science of decision
making and bias. Most importantly, we're here

(03:21):
today to discuss his work on his critically acclaimed book, The
Loop: How AI Is Creating a World
Without Choices and How to Fight Back. In The Loop, Jacob warned
us about the unintended consequences of artificial
intelligence on our free will and decision making, offering a
vital guide for navigating an increasingly automated world.

(03:46):
Jacob, welcome to the show. I really appreciate you having
me, this is very exciting. Good, good, good. AI is
your bread and butter right now.
I know you've done a lot of work into psychological topics,
right? Especially influence.
How did you get into all this?
Let's start at the very beginning.

(04:08):
Sure. So I really appreciate you guys
having me. So I've been a technology
correspondent for a long time. And once upon a time, technology
correspondents were sort of expected to be just kind of
upbeat nerds who thought only about like, which phone is
better, right? And around 2013, I started
working first for Al Jazeera and then I did this PBS series.

(04:28):
I was thinking more and more about technology as a kind of
reflection of society. I had an early mentor who
used to ask this question. A lot of the time he'd say, what
is the technology trying to tell us about ourselves, right?
We build stuff specifically to address something in ourselves,
a vulnerability, a market opportunity, whatever it is.

(04:49):
And so as I was starting to think about that, I had this
opportunity to do this PBS series Hacking Your Mind.
And Hacking Your Mind was a four-hour kind of crash course in
the last 50 years of behavioral science.
And I got the opportunity to travel all over the world and
interview all these people who study why we make choices the
way we do. And the basic takeaway of that series, as

(05:10):
is the basic takeaway of the last 50 years of all of this
behavioral science: that a huge amount of our
decision making, probably 85 to 90% at any given
time, is totally automatic and based on very, very ancient
instincts, instincts that we share with primates, like, you
know, choosing who to trust, what to eat, where to go.

(05:34):
One big example of this is like if, if any of us right.
So I, I just spent the weekend driving my kid from volleyball
game to volleyball game. If I'm assigned to drive my kid
somewhere, as often as not, I might just automatically drive
her by accident to school because that is the standard
route you take every morning, right?
And so I don't know how many, how often you've been in the

(05:55):
position, right? You guys have like, you show up
at the school and you're like, oh, wait, this is not where I'm
supposed to take her. Oh, my God, why have I done
this? And the fact is, right, if you
cast your mind back across that drive like you've done an
incredibly sophisticated mechanical task with no
consciousness at all, you're on total autopilot, right?
And that kind of autopilot is what all these behavioral
scientists have shown is driving a huge amount of our choices in

(06:18):
our day-to-day lives. So at the same time that I was
learning about all of that and the way in which those choices
that we make are very predictable, that the mistakes
we make are very predictable, I was also, in my day job as a
technology correspondent, learning about these fly-by-night
companies that were starting to use early machine
learning and what was at the time called human reinforcement
learning algorithms. This was early AI, before

(06:42):
ChatGPT became possible. And I thought, oh, this is a
real problem. I'm not the kind of person who
believes, like, I have to get my ideas into the world at all
costs. But this was a moment where I
was like, man, I don't see anybody else talking about the
intersection of our vulnerabilities in terms of our
brains and our behavior, and this for-profit industry that's about

(07:03):
to explode around the commercialization of AI.
And I thought, you guys, that I was, like, five years early.
I thought I was like, you know, way out in front of this.
And I had a few people read this book and be like, I don't know,
this sounds like science fiction.
This is pretty speculative. You know, when we first
published the book, the first version
of it came out at the

(07:24):
very beginning of 2022, we didn't even put the word AI on
it because we thought, oh, that'll
alienate people. People will think this is a book
for nerds. And we want,
you know, everyone to read this.
And then nine months later, ChatGPT comes out and suddenly
my thesis comes to life. And, you know, I mean, I'm just
seeing my book that was supposed to predict something

(07:47):
way out in advance and try to hopefully warn people against it
in some way. I was encouraging people to get
involved in policy and in lawsuits to try and slow this
down. Suddenly it's just taken off.
And I have to admit to you that, like, the last year and
a half I've kind of been in a little bit of a depressive
slump, just kind of watching my thesis come to life.

(08:07):
I've just been so bummed, honestly, that I've kind of
withdrawn from the topic. I've just been like surfing and
hanging out with my kids. I got laid off from NBC News
last year. And so I've had some severance
and some time to just like chill.
And now I'm suddenly looking around.
I'm going, you know what, I've got to start talking about this
again. I've got to get re-engaged in this
topic because it is happening so fast and it's poised to, I

(08:29):
think, transform our world and not, I would argue necessarily
for the better unless we start making some big moves right now.
And so that's why I'm joining you today:
I'm trying to have as many of these conversations as I
can to say, listen, I don't think our brains are ready for
this. That's fundamentally my thesis
here. So which phone is the best, then?
Yeah. All that stuff.

(08:52):
So, So what? I know, right?
Exactly. I can't tell you how often
this exact thing happens. I'll get into a long
conversation with somebody about, like, agency and human will and
blah, blah, blah, blah. And then they'll be like, and by
the way, I'm having some e-mail trouble.
Can you help me with my e-mail? That's really funny.
Well, you know, if you're offering, you know, yeah.
Yeah, I know, right? So you're just like an

(09:12):
overpowered IT guy, right? Yeah, yeah, exactly, exactly.
I don't know about all that. But anyway, can you help me with
my Ring camera? No, we've recently dove deep
quite a bit into AI and the troubles that it poses.
Jeff, did you have a question, man?
So I'm sorry bud. No, no, you're good.
Go ahead. OK.
Yeah, we've dove very heavily into it.

(09:32):
I can guarantee you there are going to be topics coming
out of our mouths today that you may
never have been asked. Let's go.
I'm really excited to hear
about it. But I want to get everything else out of the
way first before we get into, you know, the kind of freestyle
questions and everything. Jake, Jeff, do you guys have any
questions before I dive into the ones that I have?

(09:53):
Nope. I hope you're ready, Jacob,
because I'm the crazy one, so we're going to be...
Good, good, good. I don't have any questions, I'm
just, like, interested in the topic.
I'm so against AI it's not even funny, and I just watched that
artificial intelligence movie and Bicentennial Man yesterday,
and so I'm pretty well versed.

(10:14):
You're well poised for this conversation.
You are ready. He did all of this studying.
Look at him. It's the first time he's ever
done studying. But yeah, well, I can't say I'm
a huge fan of it, but I can see it benefiting us in
limited usages. But I can definitely see the
danger that's on the horizon. It's actually terrifying.

(10:35):
Well, here's the thing, I'll just sort of say, maybe this will
kind of set off some questions and some conversation.
Like, I want to say, I am
not opposed to it as a
technology. It's almost always the
case for me that a technology in and of itself is
not a problem, you know. So, in the case of AI, you know,
there are incredible uses, right?
And as a guy who used to, you know, run a magazine

(10:57):
that was all about foundational science, like, the incredible
work that can be done with a pattern recognition system made
possible by AI. The
revolution that made all of these LLMs and generative AI
possible, right, is this transformer
model revolution that in 2017, 2018 made it possible to

(11:17):
suddenly take huge amounts of undifferentiated data that you
could never, as a human being, sit and sift through, find
patterns in, and get all these insights out of the bottom
of it. If you think of all the things
that scientists could pour into a funnel like that, right?
Like every single photograph of a mole on someone's back
that might or might not be cancerous, right?

(11:38):
It's incredible at finding the patterns, predicting
whose mole is going to turn cancerous and whose is not.
You pour in all of the, you know, uncharted, unnamed (well,
everything's named) stars in the sky,
right. And you look for
patterns in how they are moving or what's going on.
You know, incredible discoveriescan come out of that.

(12:00):
I've talked to people who say, you know, I've, I know a guy who
is working on a system. He exclusively tries to
make AI systems for either not-for-profit sort of
healthcare systems or for, like, state governments, and he
makes no money as a result. And his thing is, you know, he
says if you gave me everybody's birth certificate and everyone's

(12:22):
home address, I could use AI to predict whose apartments need to
be repainted in advance to cover up the lead paint and avoid lead
poisoning in kids. You could wipe out lead
poisoning in young kids. You know, like the ability to do
incredible stuff like that is, is enormous and fantastic.
But here's the problem, you guys, no one's making money

(12:43):
stripping lead paint out of apartments.
Nobody's making money pouring stars into funnels, right?
That is not how we work in this country.
Instead, you're in a world already where the leadership of
these companies is saying, and now we're going to start, you
know, using this to let you write erotica or make cartoon
porn girlfriends for yourselves or whatever it is, right?

(13:04):
It's how you make money off it that turns the technology
into something bad. And in my case, I was just at a
conference where I was watching presentation after presentation
after presentation by people whowork in what's called trust and
safety. These are the sort of content
moderators at social media
And, you know, they were just showing time and again and again
here, look, here's the photographs that AI can generate

(13:28):
that absolutely no one will be able to spot as being fake.
Here's the political messaging that turns out to be
substantially more persuasive
than messaging written by an actual human.
And it doesn't even have to be written by somebody who
writes the language, who even understands our language.

(13:48):
So you're just seeing time and again that this stuff
can be used for great stuff, butit can also be used for terrible
stuff. And that terrible stuff tends to
make us more money. And so that's my big problem.
That's my big pushback against
It's not that AI can't be used for amazing things, but as a
human species, we tend to do the worst things with what we have.

(14:09):
And so, you know, my big concern is, you know, the whole deepfake
technology. At some point it's going
to get so freaking difficult to tell what's a fabricated photo,
or, you know, like, if I wanted to make a video of Jacob Ward
drowning puppies in a river in order to ruin his life, you
know, I could do that in 10 seconds and put it out there.

(14:33):
And then you're still going to have people out.
Even if it's able to be, you know, shown that this is, you
know, baloney. It's not a real video.
You're still going to have people out there years after the
event, like every single, you know, smear campaign that we
see. They're going to believe that
lie or at least think in the back of their heads.
Maybe Jacob Ward is capable of drowning puppies.

(14:54):
Totally. You know, and let's
say, and let's say I...
If it ruins your life, you know, why not. Totally.
And let's say I am a serial
puppy drowner, right?
And you catch me with a camera doing it. I'm then going to be
Look, you can't tell. You can't believe your lying
eyes because clearly we're in a world in which you can't trust
video evidence anymore. This is my whole problem with

(15:15):
the latest release from OpenAI, which allows people to
make these 10 second videos thatare all set for, you know,
they're perfect for social media and they're already flooding my
social media, you know, like, I'm like, what is the purpose of
this? What is it that we needed?
What problem is this solving? Right?
Like, I think we are already not doing a great job with hanging
on to what's true and what's not.

(15:36):
And it just feels like it's almost as
if the objective is to absolutely destabilize what we
can trust and what we can't so that there won't be any video
evidence tomorrow. I mean, as a journalist who
believes that there really is a way to prove a thing. You
know, one of the number one ways you prove a thing is you catch
somebody on camera doing that thing.
And if that's going to go out the window, I don't know, you

(15:57):
guys, like, I'm not sure how we're going
to agree on anything anymore. Yeah, yeah, I mean, obviously
I see that that's, like, an issue, but I don't even see that as,
like, a major issue personally, because I think that most of
society is kind of already at that point, even without AI.
Like, nobody believes anything, right?
Or, you know, you have all these different, like, factions of ideas,

(16:17):
and people can't agree on anything no matter what, even if
you do show them a real video of something, right?
Like it doesn't matter at this point.
So my bigger concern is the use of AI to sift through
and like organize data on us, right?
So I know that there's like a big, we all know this, right?
Like we're all being tracked in every way, shape and form,

(16:38):
right? Thank you,
Patriot Act. They're listening to and
watching everything that's happening.
It's not happening. Yeah, literally, right.
But the problem is, like, they're going to take it up to the next
level, I think, right? They're going to start using
this to track, like, biometric data on all of us and these
types of things. The control factor, whether
it's a government or corporations like, you know,

(16:59):
compiling all that data to feed you, whatever.
That's my bigger concern. The deepfake stuff is, like,
whatever; nobody believes shit anyway.
So, well, yeah, I'm not quite as nihilistic
as you are about truth. And I like to believe, you know,
I'm trying to do this job still of sifting some truth out of the
world. But I absolutely agree with
you that like that trust is at an all time low around that

(17:19):
stuff. And so it's going to get even
worse in that case. But I 100% agree with you that
there is about to be a huge surveillance problem. And that
problem... you know, I think that the government
version of it, even if you put that aside, and I'm not
arguing that that's somehow not something to worry

(17:40):
about. But like, I think mostly about
corporate actors these days, right?
Because the pipeline of data from the incredible amount of
information that we have volunteered to these companies
in the last 10 years and how that is about to get poured into
these funnels that can find patterns in huge amounts of data

(18:01):
has created an incredible surveillance opportunity for
these companies, right. So, like, once upon a time, the
big breakthrough, Mark Zuckerberg's big breakthrough,
was this idea of revealed preferences; that was the term of art
in the social media world. Which is, you know, if I look at
you enough, Jeff, and I look at
enough other people whose behavior is similar to yours, I can

(18:25):
pick out of your collective behavior some revealed
preferences, as they call it.
And that just means like the stuff where if I asked you in a
poll, hey, are you into this kind of thing?
You'd be like, no, I like this and this and this.
But your unconscious mind, you know, just can't help but look
at X, Y, and Z when it's shown it. And it's, you know, it's the

(18:45):
equivalent of like you drive past a car accident.
Nobody would ever say in a, you know, in a survey, I like
looking at car accidents, you know, but it is clear from your
behavior, the behavior that you should exhibit on the social
media platform that you can't help but look at that, right?
So you pour enough of that kind of revealed preference data
into a social media platform and they start to build, right,

(19:07):
engagement algorithms based on that.
And that's how we got into the trouble we were in to start
with. Now you're going to have this
incredible ability to find meaning in that data and
predictive forecasting that's going to let you say this
person's going to be way into this and way into this kind of
thing. And it's not even going to be a
question of funneling, you know, content that you, you know, that

(19:31):
someone has made to you; they're going to be able to just custom
generate it, right? So for me, that's one huge
problem. And then, like you say, the
surveillance part of it, I mean,already there's technology out
there that, you know, can turn Wi-Fi signals into an ability to
tell how many people are in the room and who's moving around.
And you know, the incredible ability to watch our behavior on

(19:53):
an ongoing basis is just going to be unprecedented. And the
fact that it's all going to be
done for money almost makes me more worried than if it were
done for some other reason.
I'm more worried about it being done by corporations than
I am by governments. But that's my sure little thing.
I'm with that. I mean, for sure, you know, I've

(20:14):
been saying, me and my other cohosts,
We talk a lot about this thing that's been talked about a lot
recently, which is kind of weird: predictive policing.
Have you heard of this? Yes, PredPol.
I have a whole part of my book about that.
You know, like to me, like that ties into all of this data
collection and the surveillance state stuff.
So, like, it's literally going to be like Minority Report in my
worst case, right? It's like they're going to just

(20:36):
look at you and be like, hey, we can predict based off
of all these things that you're talking about that in five years
you're probably going to commit some kind of crime, and they'll
be able to somehow work that into legislation and like, it
becomes like a whole new world, literally.
And here's the thing that I get really worried about
with that: you know, one of the themes that I
come back to a lot in the book is this concept of
anthropomorphism, right? Which is

(20:59):
assuming
that a system you don't understand is somehow really
sophisticated. So just because you don't know
how it works, you assume it's right. It's just a
natural human tendency. This is one of those greatest
hits of the last 50 years of behavioral science was learning
that one. In that case you just have

(21:19):
these. So that's the thing about
predictive policing, right, is that it delivers.
This is true also of, like, facial recognition systems: they
deliver, you know, a conclusion for an arresting
officer that, you know, doesn't explain itself, doesn't
say how it came to that conclusion and saves that person

(21:41):
a huge amount of work, and that is an irresistible
combination. And so you wind up in these
systems where, so PredPol, Los Angeles used this for a long
time until it was basically shown that it had all kinds of
biased results. Because what it was essentially
doing was kind of pre-indicting sections of the city based on

(22:04):
past patterns. And as a result, a kid who just
happens to live in that neighborhood is going to get
grabbed by the police way more often.
You know, a perfectly innocent kid is much more
likely to get accidentally grabbed by the police.
And so eventually LA has now cancelled their contract with
these systems. But for years they
were using it, and it was because everyone just kept saying

(22:25):
it's a neutral system, It's a neutral arbiter.
You know, but the truth is, no one really at the front
lines of operating that system knows what it's doing, how it's
making its choices. And it's so convenient.
No one wants to resist it. So that's my other worry about
this. You know, I really, I think
we're not great at saying... you know, one stat I bumped into
in the book: I looked at this big study of

(22:48):
the big healthcare records company, a company called Epic,
which deals with your electronic medical records.
And they created a predictive system that will help
cardiologists make a priority list of their patients that day
based on who, who's most likely to have a cardiac event that

(23:09):
day. And the thing is, it works.
It works pretty well. They're getting out in front of
cardiac events early. These doctors are, are able to,
to get in and make interventionsearly like that.
You know, in a lot of ways you can't argue with it because it
really works well. But here's where I get worried
about it. I asked the
makers of it: well, your doctors must be
irritated to be told what to do by a system like this.

(23:31):
And they said, oh, no, no, they're thrilled.
They're thrilled about it. Like they don't want to have to
make that choice. They love being told what the
order is. And I was like, oh wow, has
anyone? And, and how often do they ask
how it makes decisions? And they were like, one guy
asked once and that's it. You know what I mean?
And so, like, there but for the grace of God, right?
You might have a system that works really well.

(23:52):
You might not, you don't know, but people are only too happy to
let the decision get handed off to these systems.
This is what our wiring is about.
We don't like to make choices. We like to be able to offload or
outsource our decision making. It's part of our human
programming. And this is the ultimate
outsourcing system, and yet we have no idea how it works or
whether it's doing a good job. Do you believe?

(24:14):
I mean, that's basically a form of triage, isn't it?
Yeah, right. And, like, again, you'd
want that to work well. But I had a
physician once explain to me that one of the number one
causes of malpractice lawsuits is a failure to look at a
patient's back. And they'll then get bed sores

(24:36):
and no one finds out,
And the theory going about that is that the doctor is too buried
in the chart. The doctor's already overworked,
right? Has a patient load they can't
handle. They're buried in the chart.
They just look at the vitals that are on the chart.
And then they say, you know, OK, I think it's this, let's do
this, right? And they don't, they just don't
have the time or the incentive to roll that person over and

(24:58):
check out their whole body, right?
Like it's that kind of thing. And I, I would just worry like
at every level of really essential services in this
country, you're going to have people saying, oh, I don't have
to do that anymore. The AI tells me which suspect to
arrest. The AI tells me who to hire for
this job. The AI tells me which of these
people I should date. We're going to be using these

(25:19):
systems in ways that I think are going to do
to our ability to make good choices for ourselves
what, like, GPS has done to our sense of direction.
I can't find my way around my own town anymore because of GPS.
Same. Is this going to be?
So I kind of go back and forth on this a lot.
You know, obviously, like it freaks me out because we're
living in this transitional period, right?
We're going to witness it from before.

(25:41):
I remember going outside and playing, right?
And then now it's like we're going into this new world
situation. But is it going to be a thing
where in a few generations in the future, maybe 50 to 100
years in the future? Like, it's so well developed and
advanced that it's actually incredibly beneficial, like net
positive for humanity, right? Like, we live in some kind of,

(26:01):
you know, utopian paradise, you know, an abundant civilization
because of it. You know, I mean, that is
certainly the vision being sold by the
companies making it, right? It's the
weirdest time to be a reporter, because they say things that in
the old days, and I'm old enough to remember some old days, would
have been an absolute death sentence from a sort of public

(26:22):
relations standpoint. So you'll have a guy like Sam
Altman saying openly, huge numbers of jobs are going to be
destroyed by our creation, you know, and there's going to be
big scams that are going to be really scary made possible by
what we have built here. You know,
he says it, and then in the same breath essentially says,

(26:43):
but it's going to be worth it because down the line there's
going to be this incredible, you know, upside
for humanity. And I think I would just say
like, again, if it were up to me, I'd
make it like a five-year moratorium on
commercial use and just give it to the scientists.
I'd like to have just scientists use it for five years, and

(27:05):
then we can start trying to make money off it, you know, and
let's see where that takes us first.
Let's see if we can wipe out cancer, and then we'll start
wiping out, you know, real-world girlfriends, OK? Like, hold
off, you know. And in the case of, like, already there
isn't even evidence yet that it has any kind of real
productivity gains, right? Like there was a big MIT study

(27:28):
recently that showed that 95% of these companies that have
adopted this stuff are reporting no productivity gains
at all. Like, they can't figure out
any improvement that's been made.
So, you know, it may be that that's the case, but you also
have to think about the other kinds of things that the
leadership of these companies predict.
So one of the things that Sam Altman also has said, he

(27:50):
said it in a January 2024 podcast with Alexis Ohanian: he
said that he has a bet going with his tech CEO friends
on their group chat as to when we'll see the first
billion-dollar one-person company. And that's, like, a dream of theirs.
He says it like it's a positive thing.

(28:11):
And so I think to myself, OK, well, wait a minute.
So if the future is a handful of
single-person billion-dollar companies, what are the rest of
us going to do for a living? You know, and the
dream they sort of offer is, like, we're going to have a huge
amount of free time. I don't know about you guys, but,
like, this is the United States. We don't like free time, and we

(28:32):
don't like to give people free time.
We certainly don't like to pay people for free time.
You know, to my mind, there's just
no example in history of a time when we've made it possible
to do less work and given people, you know, a living for
that. Like, that's just not what
we do. Sure.

(28:53):
I mean, I do also, like, pay a lot of attention to macroeconomic stuff. So, you know, it is also interesting to me that it seems like, and maybe this is just my conspiracy-minded self thinking too much, but it does seem like we're going through some kind of monetary change, right?
Yes.
So it could be something where they kind of know, they're foreseeing this being an issue. So they're trying to flip,

(29:14):
whether it be into, like, crypto rails or whatever the case might be. Because I know when AI started dropping, everybody was like, oh, we're going to lose all these jobs, we're going to talk about universal basic income. And obviously that doesn't sound too good for anybody who's not just a lazy ass, right? So.
Right. I mean, already, right? You wouldn't want that. A lot of people don't want that.
That's right. Right. But if there is some kind of

(29:36):
like, monetary shift globally, and we stop looking at the financial system the way that we always have, maybe, right, maybe there could be some way to make all that work together. But.
Maybe. I mean, you know, now we're out past the power of history to be in any way predictive. Like, we don't have any examples

(29:57):
of that being possible.
And so, you know, yeah, maybe down the road there will be some kind of, you know, maybe it'll be the end of fiat currency and we'll all be, I don't know, trading sugar syrup against some digital currency, and we'll be fine. My problem is that there's a thing called the Jevons paradox that

(30:18):
really haunts me. Back in the 19th century, a guy named William Jevons, a British economist, was trying to figure out why it was that the British Empire was running out of coal, which he recognized was a big problem, that England itself was going to run out of coal. And the thing he identified was this weird paradox in which the technology at the time had actually gotten

(30:41):
better at burning coal in a much more efficient way. There were these new steam engines that had really revolutionized the efficient use of coal, and yet they were using more coal than ever. And it's become a way of describing that dynamic in all these different fields. So it turns out, like, the more aqueducts you create to hang on to drinking water, the more we use up drinking water. Like,

(31:03):
in example after example, the more efficient you are in using
a thing, the more you consume it.
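The dynamic Jacob describes here can be sketched as a toy model. This is an editor's illustration with made-up numbers, not anything from the conversation: the constant-elasticity demand curve and the elasticity value of 1.5 are assumptions. The point it shows is that when demand for a service is elastic enough, doubling efficiency lowers the effective cost of the service, demand grows faster than the efficiency gain, and total resource use rises, just as with Jevons's coal.

```python
# Toy sketch of the Jevons paradox (illustrative numbers only, not from
# the conversation): when demand is elastic, greater efficiency lowers
# the effective cost per unit of service, demand grows faster than
# efficiency, and total resource consumption goes up.

def resource_use(efficiency, elasticity, base_demand=100.0):
    """Resource consumed under an assumed constant-elasticity demand curve."""
    cost_per_unit = 1.0 / efficiency  # better engines -> cheaper service
    # Constant-elasticity demand: cheaper service -> more of it demanded.
    demand = base_demand * cost_per_unit ** (-elasticity)
    # Resource needed to deliver that much service at this efficiency.
    return demand / efficiency

before = resource_use(efficiency=1.0, elasticity=1.5)  # baseline: 100.0
after = resource_use(efficiency=2.0, elasticity=1.5)   # ~141.4 after doubling
print(before, after)  # consumption rises despite the efficiency gain
```

With the assumed elasticity of 1.5, doubling efficiency pushes consumption from 100 to roughly 141 units; with elasticity below 1 the paradox disappears, which is the boundary economists actually argue about.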
And that's how I think it's going to be with our free time. I think that the more it is possible to give someone free time, the more we're going to consume that free time and take it away from people, basically. I just don't believe that there's a world in which you're going to, you know,

(31:26):
make a sort of free-flowing monetary system and a kind of, you know, super generous social policy that somehow can work across borders, in which we're cool with people not having to work and AI doing all the jobs. You know, the New York Times just reported that, like,

(31:46):
Amazon's about to wipe out half a million jobs. They're going to automate half a million jobs, according to internal documents at Amazon, right? That's, you know, half a million jobs out of, I think, the 1.3 million people they employ. Like, that's a lot of jobs gone. I don't think we're on a path toward hanging out and living a happy, relaxed life.

(32:06):
That, in my experience and in the historical lessons, doesn't tend to be the result of the moves of big companies.
You own nothing and you will be happy, as they say.
Yeah, that's right.
So even if we did have a completely automated world, we'd all end up being like those people in WALL-E, just, well, super fat, staring at screens.

(32:28):
Jake, here's where we're at, right? This is how I think about it. So everyone keeps saying, oh, you mean the Terminator, you're worried about the Terminator. And I'm like, no, man, I'm not worried about the Terminator. I'm worried about Idiocracy.
Yeah. Oh, yeah. That movie.
I'm worried about WALL-E.
Exactly. WALL-E would be OK, though. Like, I mean, WALL-E is a little bit like what Jeff is, you know, suggesting could be

(32:49):
possible, right? Like a world in which our Slurpees are brought to us and we watch TV all day. Like, we're OK, you know? Am I right? That's not the worst outcome. But I'm worried about, you know, something between, like, Elysium and Idiocracy, you know? Like a very powerful gap, a big gap

(33:14):
between rich and poor, you know, an extremely sharp pyramid where only a handful of people get to live at the top and the rest of us are all at the bottom, and no one has the critical faculties to figure out a solution to the problem.
Yeah. And then also the plants, obviously, craving Brawndo, because...
Exactly. Exactly. That's right. That's right.

(33:35):
Exactly.
So how worried are you about the concept of a technocracy?
Well, I am worried about that. I mean, you know, I have a podcast called The Rip Current, and I was trying to figure out what I was going to call it when I launched The Rip Current. And at one point I was thinking about calling it NPC, because it's a term that you hear a lot in Silicon Valley at

(33:56):
the top level, from people. So NPC, right, for anybody... everybody who listens to you guys will know this term. But non-player character, right? It's the background characters and so on in video games, it's like the extras in movies, right? And a friend of mine makes the joke that the people running the top companies in Silicon Valley are like your

(34:16):
friend who watched Akira too many times and didn't quite get it, you know, didn't quite understand that the lesson of that movie is not that superpowers are cool, it's that you shouldn't have superpowers and that it's bad for everybody else. You know, and that's sort of how I feel about the leadership of a lot of these companies. Like, there is this idea, you know, that they can joke about the idea of a billion-dollar single-person company.

(34:40):
That's the definition of a technocracy, right? That, for me, is an enormous red flag. And so I do worry about that. I mean, we had a whole campaign here in Northern California, where I'm based. There's a whole group of people who were trying to build their own tech city. They were trying to basically incorporate a whole part of, I can't remember which, I think it was Lake County, but this rural part of California.

(35:01):
They were going to basically just, like, take it over if they could. They lost, but now they've got a whole new plan to try and co-opt an existing town. You know, there's a real idea in tech circles that, like, the very smartest are the ones who

(35:22):
count, and everybody else is just kind of a user and, you know, a consumer. And that bothers the hell out of me.
What do you mean, they were trying to take it over? Like a 15-minute city or something like that?
They were trying to basically... the concept... I'll do a little subtle Googling while I can here, but there was a whole idea that they were going to create, like, a sort of tech-utopian city.

(35:45):
The rumor mill also had it for a while that, you know, at the beginning of the Trump administration, there were some conversations going on around, you know, trying to create some kind of regulation-less city in which tech companies could experiment without any regulations at all. And in the big, beautiful bill that passed, at one point there

(36:06):
was a provision that was going to make it such that there would be no regulations allowed on AI at all for 10 years, and the states would not be allowed to pass any regulations on AI for 10 years, right? So there's this clear feeling on the part of the leadership of these companies: just leave us alone, you know, let us figure it out.

(36:28):
I once interviewed the former CEO of Google, Eric Schmidt, and he basically said, you know, policymakers can't be trusted to think about this stuff. They're not smart enough. Leave it to us. We will figure it out, and then we'll regulate it later. We'll figure out what the rules should be, because only we are smart enough to do this. This is how I think they think about things.

(36:50):
Yeah, I mean, I kind of see that, right? Going back to what I was saying earlier about, like, maybe at some point in the relatively distant future it becomes, like, a utopian situation. But, you know, going back to what you were saying, if this AI can figure out a way to cure cancer or slow down aging or whatever the case might be, it's not going to be cheap enough for us to use,

(37:11):
right? Like, those advances are going to be something that only these billionaires, at least I think, are going to be able to utilize. So maybe that's why they have this mentality, because in the back of their mind they're like, hey, man, we're going to live the next 1,000 years, the schmucks aren't, you know. So if we could just...
The longevity stuff is a whole cultural facet of Silicon Valley. There's a huge amount of cultural

(37:32):
interest here at that high levelin sort of living forever.
There's a huge amount of longevity talk.
You know, you've got people who are, you know, taking endless
supplements and icing their skinand doing all kinds of stuff
because they want to live forever.
Yeah, all that stuff is really is really there.
And at the same time, I mean, I just think just you, there's a,

(37:53):
a feeling of intolerance. There's a guy who's very popular in tech circles here named Curtis Yarvin. He has a very successful newsletter and is a very successful sort of public intellectual in Silicon Valley. And Curtis Yarvin's whole argument is, we need a monarchy. We should have a techno

(38:13):
monarchy. We shouldn't even be doing democracy anymore, because it slows stuff down. And, you know, there are lots of people in the tech community who are into that idea, people like Peter Thiel, people like Marc Andreessen, who's a big VC out here. There's a real cabal of people who really believe the smartest

(38:33):
people's ideas count, and everybody else is just kind of an NPC. And I think that is real, which is why I want the utopian idea to happen. But this is the other thing, right? Let's say they make an AI-based cancer system. I mean, you know, what they call the capital expenditures on AI, the investment they actually have to make in the data

(38:55):
centers: the amount of money being spent on that accounted for a third of the total growth in GDP between last year and this year. A third of the difference between how much we spent last year as a nation and how much we spent this year as a nation is just in the tubes and computers used to power this stuff. They're investing tens of

(39:16):
billions of dollars, and that money has to get made back. So let's say they do figure out how to cure cancer. They're not going to give it away for free. They're going to need to make that money back. That's what's happening right now. And so, to my mind, the commercialization of AI is going to create this incredible pressure to make money. And that's not going to lead them to utopian stuff, it seems to me.
You guys ever seen that

(39:36):
show Altered Carbon?
Yes.
You know what's so funny? Those are the books I relax with, weirdly enough. I love the writing.
I can't read, but the show was great. And this is what it makes me feel like. It makes me feel like that utopian, AI-driven world is going to be for those people who live above the clouds, right?

(39:57):
It's like you were saying earlier, it's going to be a huge gap between, like, the rest of us and them. And, you know, they might live 1,000 years. They might be able to upload their consciousness to the stacks and clone themselves to continue, like, some weird shit like that. But that's what I always think about when I really start thinking about this stuff.
Yeah, totally. I just think, like, the

(40:22):
problem with our society as we've got it built right now, right, the whole neoliberal project, the capitalist world that we live in, is somebody's got to lose, right? It's not a world in which everybody gets to win. Somebody wins and somebody loses. And the problem has always been, can you keep those things in at least a relative balance, so that

(40:43):
being at the bottom end of the society isn't like living in a medieval way, that it is, you know, an OK life, right? And we've had that in American history. But this is not the path to that, it doesn't seem to me. So. At a certain point, we can also talk about happier topics, you guys. I know, like, I'm such a bum.
Oh, no, no, no.
I ruin every party I go to.

(41:04):
Our listeners are sick. They love the depressing stuff.
Yeah, you seem like a lot of fun, for sure.
No, our listeners are pretty gross. Like, our toughest, our highest-grossing shows or topics are ones where people are horribly killed in terrible accidents and things like that, they're missing forever and

(41:24):
stuff like that.
Maybe you're my tribe.
We love it.
So yeah, we've talked a lot, just kind of freestyling here. But I want to shine some light on the book. I'm a big book guy, so I always end up steering the conversation, especially with an author, into the topic of their book. So I want to talk about The Loop, right?

(41:47):
So, which you published, like you said, in January of 2022, and you predicted the AI revolution we're seeing now. And we talked about that pretty heavily already, not necessarily a prediction, but AI in general. The subtitle is How AI Is Creating a World Without Choices. Now, could you break that down for the audience? How exactly does a

(42:07):
system designed to give us more options or a better experience ultimately limit our choices?
Well, I love this question. So, as I said, I spent a huge amount of time looking at both the behavioral science world and the technology world. The behavioral science world accounts for about the first third of the book, where I'm trying to give people kind of a crash course in some of the big lessons of the last 50 years and

(42:30):
more of behavioral science.
And one of the big ones is about us as a species. So a lot of this comes from a guy named Daniel Kahneman, who wrote a book called Thinking, Fast and Slow. That was a very famous book for a time. He won the Nobel Prize in economics. He was a psychologist who, beginning in the 1970s, with a partner named Amos Tversky, created a whole bunch of

(42:51):
experiments that showed that, basically, people hate to make choices and will take any excuse to outsource that decision-making if they can. And he came up with this thinking-fast-and-slow idea. So there's a fast-thinking brain and a slow-thinking brain, a System 1 and a System 2. That System 1, fast-thinking

(43:13):
brain, is the one that, as I mentioned earlier, we have in common with primates. It goes way, way, way back. It is an incredibly powerful and well-tested decision-making system. And it is the system that allows you to drive automatically to your kid's school without thinking about it.
Yeah, your lizard brain. That's how we say it.
Exactly. That's your lizard

(43:34):
brain. And, you know, we're typically embarrassed of our lizard brain. But the truth is, our lizard brain got us where we are, because using your lizard brain, you don't have to think things through. You know, if a snake comes into the room and we're all sitting together, you don't go, oh, what kind of snake is that? You freak out, stand up, everybody else in the room freaks out and stands up,

(43:57):
and you're out of the room, and the tribe is saved, right? It's the automatic ways we make choices, not just individually, but as a group. We transmit it: when I show you a mask of horror because the room's on fire, you don't need to see that the room's on fire. You just run. You know, Cedric the Entertainer has a great line about that. He's like, when you see Black people run, you don't ask why, you just start running. He's like, you can't help but

(44:19):
run, you know. And that is exactly it. That's how our system works. So that's all the lizard brain stuff. Our slow-thinking brain, what Daniel Kahneman called our System 2, that's the cautious, creative, rational, kind of better, more human decision-making system. And the idea is that, like,

(44:43):
100,000 years ago, that's the system that, back when everyone was living on the continent of what is now Africa, caused somebody to stand up and say, what's over there? It's that part of the brain that goes, oh, what else is beyond this campfire? You know, I wonder what would happen if we didn't just think about our survival. Like, what happens after we die? And why did I have that weird dream?

(45:03):
You know, those thoughts are a totally new decision-making system. And that system is super glitchy, because it's untested. It's like a brand-new piece of software, in evolutionary terms. The estimate is that 90% of your choices are made by the lizard brain, and this little group of choices

(45:24):
is made by your slow-thinking brain. And my thesis in the book is, if you're a company that's created a pattern recognition system that can tell you what people are going to do in the future, that can forecast their behavior and even maybe shape their behavior, because you want to try and make money off of them, who are you going to try and

(45:45):
sell to? Do you want to sell to the cautious, creative, rational part of the brain that thinks things through, right, and makes good choices? Or do you want to sell to the one that can't help but look at the cleavage picture, right? That can't help but drive to his kid's school by accident, right? The automatic brain that, you

(46:06):
know, was, like, where my alcoholism came from. You know, they would much rather sell booze to that guy than to the guy I am today, who said, you know what, I've got to actually stop drinking, and I quit drinking. So, to my mind, a system like this absolutely could create huge amounts of choice. We would love that. But I think

(46:26):
when you overlay the need to make money off these systems
onto it, they're not going to. Why would they?
Why would they try and make us more like these people?
They're going to want to make us, you know, more like our
rational selves. Why wouldn't they want to make
us more and more instinctive? Because the instincts are so
easy to predict. Do you think that they've been
working towards that already with just the way that the

(46:47):
algorithms work, you know, TikTok brain and all that, and the shortening of attention spans?
Totally. And I want to say, like, as I describe this, I'm not judgy about this, because I am the worst person I know around this stuff. Like, I am super manipulable around this stuff. You know, I'm the one who scrolls TikTok until eventually the woman comes on with a TikTok-branded video that says you

(47:10):
should go to bed now, you've had enough. You know, because, like, when the bartender says you've had enough, you know. Like, when the crack dealer says you've had enough, you know that you were addicted to crack, and he knows you're coming back tomorrow. So I very much write this stuff and think about this stuff not from a perspective of, like, hey, everybody, you've got to be smarter. I write it from the place of, like, I am not able to resist any of this, because it's so

(47:34):
powerful, you know. And what I like is that young people are better at this. Like, you know, Jeff, you said TikTok brain; Jake, you said lizard brain. The fact that we're naming that in ourselves, I just love that. This is where my hope for the future comes from: young people talk very

(47:55):
openly about, like, oh, brain rot. Oh, I got into brain rot. You know, they're conscious of how their brains fall for this stuff. And I think the more that we can articulate that, the more we're going to get to a place where not only are we going to be able to say, you know, I've got to make better choices for myself, which is part of the solution, but I don't think the main part. I think then you're going to get to a world where you're going to

(48:16):
start being able to sue companies for taking advantage of your instincts in a way that you would never choose consciously. That, to my mind, is where the path out of this is probably going to come from.
I got some crazy Allegory of the Cave vibes from the beginning of that

(48:37):
explanation.
Plato's, what was it? The Republic?
OK, tell me about that. You're better read than I am. I don't know my Republic.
It's just a small portion of his work called The Republic. Prisoners are locked inside a cave, and they've been in there for so long that they're only allowed to face one wall and see the flames from the fire that's

(49:01):
cast on the wall. They don't know how the fire gets fed or anything; they just know that it does. One prisoner frees himself, somehow breaks out of the cave, learns that there's an entire world outside of the cave, comes in, tells them about it, and nobody believes him, because their whole life, basically since birth, they were prisoners

(49:23):
inside of this cave, and all they know is the shadows cast on the walls by the fire.
So, like, when you were talking about... Jake made the reference to lizard brain, and then you had talked about there's a small part of your brain where you stand up and you say, well, what's over there? I know a lot of people, some people, right? I'm not going to use figures, I'm trying to walk a thin

(49:46):
line here. A lot of people would be like, don't be an idiot, don't look over there, because there's nothing there, right? Our whole world is right here in this cave, and everything we know is being cast in shadows on the wall. But then you have those people that are willing to look outside and say, no, there's a whole other world out here. But then eventually the people make it outside, and

(50:07):
then everyone's so depressed... and that's, no, actually, that's a great segue into something else that I wanted to talk to you about. Sorry, I'm kind of just jumping topics here really quick, but it was a perfect segue. So you had made a point earlier about how we're going to have a ton of free time, or that that's kind of the route, that's what they promised, anyway. That's what they say that we

(50:28):
will have. Do you have any fears of, I don't even know if there's a term for it, but the depression that people get when they don't have something to put their efforts towards? You see it a lot with...
Oh yeah, recent...
Retirees, or...
Oh yeah. Like, Jake and I, we were in the Navy together. And I mean, I went through it, right? I got force-medically retired out of

(50:50):
the Navy, and I went through a massive depressive cycle.
Oh, dude, I had a total ego death, and now I just spend every second that I have doing something.
Yeah. And, you know, I'm constantly busy. My wife and I, I wouldn't say we argue about it, but we talk about it all the time. You know, she's like, you always have to be doing something. And it's like, you

(51:11):
have no idea. Like, if my mind's not being exercised by a task, whether it's at work or the podcast or hanging out with my kids or something, I go crazy, or I fall asleep, right? These guys give me crap all the time because I don't watch movies, I fall asleep to them. And I didn't understand 90% of the references that were made today already.

(51:32):
But that's why I like books. I get engagement from books, because I can physically do something.
I love that.
Right, right, right. But are you worried?
Oh, man, yes. I mean, I love this. I think you're touching on something so important: purpose. You know, there's a whole world of research going on right now around purpose and how important

(51:56):
that is to human satisfaction, because a bunch of these researchers are recognizing that in the future, if people are suddenly not working, how will they function, right? I mean, this is a whole thing. It's not just about the money it brings in. It's the identity and the value you feel.

(52:18):
You know, people really like purpose, and I really worry about a world in which we have established the idea that the value of a person has to do with their production, their productivity. How much money do they generate? How much value do they generate? It's measured in

(52:40):
monetary terms right now. And if we're about to enter a world in which the AI takes care of all of that, well, then what is the value of that person? There was a think tank that I went to once, and all they were trying to think about was, how do we come up with a new way of describing the value of people beyond the money they generate? And to my mind, you know, I

(53:05):
think that's one of the big problems with what you're describing. And it's one of the big problems, I think, with people who say, oh, we're not going to have to work as hard in the future, or we're not going to have to work at all in the future. That feels like a real, like, high school freshman's idea of what a sort of perfect world is supposed to be.
I think we're going to need workand purpose.

(53:25):
I think that's a huge part of what has been satisfying about humanity. I mean, you know, we were talking about this primitive brain, the lizard brain, and our higher functions. One of the examples I always give around higher functions, and why I'm so proud to be a modern human, is all the stuff we've built, you know, the highways and the cathedrals and the bridges. Like, it's

(53:48):
incredible, and the pride people take in having done that stuff. So I just think we're absolutely going to need that in the future. And if we don't have it, because we think somehow people are going to be happier if they don't have to do any work, I think that misunderstands how our economics work, as you know, Jeff, and misunderstands how people find happiness in life.

(54:09):
I'm surprised that Jeff hasn't brought it up. I mean, it seems to me that it's the promise of a more relaxed society, but people are so forgetful that just, what was it, four or five years ago, everyone was locked inside their homes. They couldn't go to work. They couldn't do anything,

(54:31):
anything of enjoyment, which youget do get a tremendous amount
of satisfaction from work in general, you know, whether
that's working with your hands or actually doing a job, right,
whatever it might be. But, you know, those sorts of
things decreased and our mental health issues spiked through the
roof, right? And so maybe, you know, the
whole grandiose idea of like, oh, you know, less, less time at

(54:55):
work and more time for yourself and all this sort of stuff, but
with the the knowledge that it'sgoing to lead to insane mental
health crisises. And why why not?
Right. Look, you know, who would want
that? Well, you know, I don't know.
But why? You know?
This is why, This is why it bothers me so much how casually

(55:17):
you have business leaders right now saying, oh, we're just not going to need people, you know. Like, the damage that will do to society if they really lay off people the way they seem to want to. Whereas at the same time, like I was mentioning earlier, this guy who says he could do away with lead poisoning in kids if you gave him an AI system, he also has a program, I think it's for the state of New Hampshire, I can't remember.

(55:37):
But basically, he's got a system that takes new arrivals in the state of New Hampshire, people who have just arrived there, whether they're immigrants or moved there from another state. And once they register, get a driver's license or whatever, this system pushes to them: hey, would you like to be a bus driver?

(55:58):
Would you like to be, you know, a sanitation worker? What about this? You know, because there are all these jobs that the state of New Hampshire needs filled that they can't fill, snow plow drivers and all this stuff, right? To my mind, like, stop talking about getting rid of paralegals and entry-level bookkeeping and all that stuff. Don't do that. Don't wipe that out. We're going to need those jobs. Instead, use AI to find people

(56:20):
work, you know. Use AI to pair people with the services they need. That's what we're going to need much more than, you know... I feel like the idea that we need to be more efficient feels out of whack. I think the top people in America are making enough money. I think they should take a little break on the money-making, and let's get some people

(56:42):
some help. Let's use the assistance for that instead.
There's definitely a concern for a lot of these jobs, I feel like, you know, going away with automation and AI and stuff. But I'm, like, a blue-collar worker, right? Like, I go out and I do tasks, and I think about this all the time, right? I don't see how an Elon robot can do half of the jobs that

(57:04):
I've done in my life because there's certain things that you
just couldn't program into, you know, to do some of these tasks.
Now, when you're talking about, you know, a lot of jobs, sure,
right. Artists, even musicians, like
that's going away with some of the new AI writers, you know,
customer service, like a lot of these things, retail, delivery

(57:25):
services, taxis, like, yeah, all that could go away. Waste management, like, that can all be automated. But when you start talking about, like, blue-collar service industry jobs, I don't see how a robot's going to be able to do 90% of those jobs. So it could be a situation where a lot of these people do lose work, but there's so much that's going to be needed in order to,

(57:47):
like, maintain the infrastructure of society that people just don't want to do, right? Because it's not air-conditioned, it's not comfortable, it's hard work. So sure, you might have this problem, but again, kind of looking out into the future a little bit, it may become a thing where it's like, we don't actually have more free time, we just have more people doing things to upkeep the infrastructure that robots can't do.
Yeah, I hope that that is true,

(58:09):
that there can somehow be a move toward, let's
say, you know, enough people in the trades
that it could sort of balance out.
But one problem I have heard a great deal about:
I was just talking to somebody the
other day who's doing a bunch of research right now around the
thesis that AI could conceivably do to women who never go to

(58:32):
college what offshoring did to men who never went to
college back in the '80s and early '90s.
Because for a huge number of women who don't have a college
education, there have been a whole set of knowledge jobs,
bookkeeping, clerical admin, right, that can lead to a really

(58:55):
good, reliable paycheck for a long time and can lead to, you
know, a real career that then winds up even leading to a
really stable retirement. And these are the jobs that
these companies are talking about wiping out completely.
And so that's something I really worry about.
I think you're right that there could be a push toward the
trades. You know, I was just talking to

(59:16):
a guy the other day who was really lamenting that his
daughter doesn't want to become a lawyer or an engineer and she
wants to be an artist. And I was saying to him, like, I
don't know, man, have you seen what's going on with lawyers and
engineers? Like, they're getting fired left and right
because the entry level versions
of those jobs are going away. But a truly
disciplined, creative person, that could actually be, you

(59:36):
know, a valuable skill in a whole new way.
But I'm always collecting ideas, because, you
know, I worry about my own ability to make money in the
future. Like, I really don't know, you
know, and I'm too old to go become a journeyman
apprentice electrician, so I can't do that.
And there's a lot of people in my position who only have like
15 years left in their working lives who aren't going to be
able to make a transition like that.

(59:57):
So there's a real, you know, real trouble there.
But I'm always keeping my eyes open for, like, what is a
gig that is AI proof? I met a guy the other day at the
airport, and we're sitting waiting for a flight together,
and I was like, what do you do?
He was like, I own a chain of barber shops. And I was like,
yes, that's fantastic.
Tell me about that. You know? So I

(01:00:18):
agree with you, Jeff. I think there will be a lot of jobs that
won't get replaced, at least not in the short term, because it
just doesn't save enough money. Like, it wouldn't
be a cost savings to automate that work.
But you know, if you look at the original stock filings of
Uber, the company, they all say
from the very beginning, we're going to automate this work as

(01:00:40):
quick as we possibly can, right? Driving for Uber is going to be
absolutely a robot's job in a few years,
because they can do it more cheaply that way.
And so as long as they can do it more cheaply by automating it,
they will. And this is why, I think, you know, the
subtitle of my book is How Technology Is Creating a World Without Choices
and How to Fight Back. We're seeing around the world,

(01:01:01):
people are fighting back. So, sorry, in India, they have
outlawed self-driving cars because a huge number of people
in India make their living as drivers.
They have said it's illegal to have robots do that
because they know that it'll just put so many people out of
work. And, you know, we may have to
make that kind of choice here in the United States, I think.
Yeah, for sure getting the trades people.

(01:01:23):
You ain't going to catch no Elon robot doing underwater welding.
Not for a long time, you know. That's funny.
I bet. I bet if you had Elon on, he'd
be like, oh, I totally want to make a robot that does that.
It's funny too, because I'm sitting here thinking about it
from my job because I do avionics work on airplanes.
Yeah. Certainly there wouldn't be a
robot that would, you know, take my job.

(01:01:46):
However, they could use AI to figure out, you know, why
half the lights aren't working in this cabin of
this aircraft, and completely eliminate the need to pay
someone who specializes in avionics, right?
Has experience. That's exactly right.
So I mean, that's right.
That's right. We've seen, yeah, they always

(01:02:08):
threw a ludicrous number at me because they're like, yeah,
well, you got, you know, 10 years of experience working
on avionics, and that's a specialty role.
But if you could get a, you know, a computer program to just
find what's probably the solution to fix this, then you
could have any monkey with a wrench, you know, figure it out
and do it right, so. And, like, out of the little
drawer comes the exact piece that you need.

(01:02:31):
And just put it here, turn it three times.
OK, Take lunch. Yeah, that's right.
That's right. That's right.
And we'll see. Maybe so. Again, like, yeah,
right, the value of people has got to
start getting calculated in, you know, some form other than just
how much money they can bring in.
We're going to need to start defending purpose and human

(01:02:53):
satisfaction, I think, in some new way.
And we just don't have a lot of history of that, you know?
Yeah. Well, ready for a wild question?
Yeah, let's go. I don't know if you've
dove into this at all, but do you think that there's any
connection between AI and the recent massive increase of

(01:03:21):
UFO stuff that's going on right now, whether it be
through mainstream media, through independent outlets like
us or anything and everything. That's really interesting.
I don't know. I'll just say I don't know
anything about it fundamentally. Like, I hadn't, I didn't

(01:03:42):
know about the uptick in reports.
That's interesting. I think
we're in a new information ecosystem where every single
person is kind of a self appointed watchdog for weird
stuff. And that is, I think in a lot of

(01:04:03):
cases good. There are a lot of good things
that have come out of that. And there are also places in
which that creates just an incredible, you know, ecosystem
for dangerous conspiracy theorists that,
you know, get us into trouble as a society.
I think. So I wonder, if I were

(01:04:27):
to guess, I would guess that it, you know,
might be a function of just how easy it is to report
evidence of a thing and have that thing analyzed and
amplified by lots and lots and lots of people.

(01:04:48):
It's, you know, what we've seen time and
again is that when you can measure a thing more
effectively, the rates of that thing go up, because we can
measure it more. And I just wonder if it's just
because there are so many people with cameras filming the night
sky that either maybe they're really seeing

(01:05:09):
something, or at the very least the incidence of people who
think they have seen something has gotten a lot higher.
Yeah. I don't know.
I'm speculating here, but that would be my guess.
We can get into some real brain rot
topics here, we can tell you now.
I asked because one of the primary theories going around
right now is that UFOs are AI incarnate, like a physical form

(01:05:34):
of AI creating either itself to move around inside the physical
space, which also delves into, you know, time travel and
other real fringe topics, right? I mean, we dive into all of
it here. We really do.
So I was just wondering if you had heard anything like that or

(01:05:55):
had a point. I haven't. The only
perspective I'll offer on UFOs and extraterrestrial life,
the thing that has always stuck with me, is something that years and
years ago a team of academics who study space were explaining to
me about the sheer size of space and also how old the universe
is. And as a result.

(01:06:16):
So there's this thing, the Fermi paradox, which
you probably know about. Enrico Fermi,
right, one of the fathers of the atomic bomb,
would just idly chat with his lunch group, and one of the
things he would always ask is, where is everybody?
Where is alien life elsewhere in
the universe? Because clearly there's so much
potential for it to be there. Why isn't it out there?

(01:06:36):
Well, the best answer anyone has come up with that makes
mathematical sense, to answer the Fermi paradox, is that it's not that
there isn't extraterrestrial life out there. It's that the
universe is so old, and the amount of time in the universe
is so vast, that the chance that out of all of these stars, two

(01:07:00):
civilizations would exist at the same time is mathematically
very small. So the idea that in all this
darkness, the single match flame that is our
civilization, right, that lights and is
extinguished instantly on the time scale of the universe, the

(01:07:22):
chance that two of those would be lit together at the same time
such that they overlap and actually see one another is
incredibly tiny. So for me,
that's something I've hung on to in my career for a long time.
When people talk about extraterrestrial life, I think
to myself, it makes absolute sense to me that it's out there.

(01:07:45):
It just may have already happened or hasn't happened yet
such that we would ever encounter it.
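The "two match flames overlapping" picture above can be sanity-checked with a toy simulation. This is purely illustrative and not something from the conversation: the ten-billion-year window and ten-thousand-year civilization lifetime below are made-up assumptions, and for a short lifetime L in a window of length T, the overlap chance works out to roughly 2L/T.

```python
import random

# Toy Monte Carlo for the "match flame" intuition: drop two civilizations,
# each lasting LIFETIME_YEARS, at random start times in a shared cosmic
# window, and count how often their lifespans overlap at all.
WINDOW_YEARS = 10e9        # habitable stretch of cosmic time (assumed)
LIFETIME_YEARS = 10_000    # lifetime of a technological civilization (assumed)
TRIALS = 1_000_000

random.seed(42)
overlaps = 0
for _ in range(TRIALS):
    a = random.uniform(0, WINDOW_YEARS)  # start of civilization A
    b = random.uniform(0, WINDOW_YEARS)  # start of civilization B
    # Two intervals of equal length overlap iff their starts differ
    # by less than that length.
    if abs(a - b) < LIFETIME_YEARS:
        overlaps += 1

rate = overlaps / TRIALS
print(f"overlap rate ~ {rate:.2e} (analytic ~ {2 * LIFETIME_YEARS / WINDOW_YEARS:.0e})")
```

With those assumed numbers the analytic overlap chance is about two in a million, which is the "incredibly tiny" the conversation is gesturing at.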
Yeah, you've actually touched on multiple answers to the
Fermi paradox there. I always like to plug books.
Again, if anybody's interested in a really good read for
answers to the Fermi paradox, there is a book by Stephen Webb.

(01:08:05):
It's called If the Universe Is Teeming with Aliens... Where Is
Everybody? Seventy-Five Solutions to the Fermi
Paradox. Oh, cool.
And it's, it's probably one of my most read books.
I, I constantly go to it and useit for reference.
Cool. And then, of course, just to tie
it all in, the 76th solution was actually a recent thing, which

(01:08:30):
is the one that's brought up in the Three-Body
Problem series by the author Liu Cixin, where he talks about
the dark forest theory. That's another one that's
not covered in Stephen Webb's book, but that's also very
interesting, which you didn't necessarily touch on.
You were more or less there.
There's an answer called the island, right?

(01:08:53):
Where we're just alone on an island, and basically,
if you look at the Earth, all we have is the
materials here on Earth. We don't have the ways of
manufacturing what we need to be able to bend space-time and
travel vast distances, all within one lifespan, right?
So where we are is where we're going to be at.

(01:09:14):
Distances are too, too far. Time is too limited.
Unless you can go through and somehow manipulate one of
Einstein's theories of relativity via, you know,
bending of space or black hole manipulation, you're not going
to really do much. And even that is theoretical at
best. Yeah, yeah, right,
right. I'll just leave you with this,
an idea that I started the book with, which is the idea of

(01:09:36):
the generation ship, which is this concept that gets kicked
around at NASA. And there's a couple of science
fiction books that have taken this idea on.
And it's the whole idea that, so, the nearest habitable planet
to us, the one that they think we actually could walk around on
and possibly breathe the atmosphere, is called Proxima
Centauri b. And it's only like 4.3 light

(01:09:59):
years away. It's right down the block in
terms of, you know, being nearby.
The trouble is that 4.3 light years away, at the current speeds
we can travel in space, ends up being something like more
than 100,000 years. It's like 200,000 years.
So the concept is that you'd have to have people on that ship

(01:10:21):
live and die and have babies and continue to create a culture
that just lives right on that ship for that whole period of
time. It's like 2000 human
generations. And so I've used this as
my example of where we're at on this planet:
that amount of time is essentially the entire time that
we've been the species that walked off of the continent of
Africa and, you know, became our modern selves with

(01:10:44):
our better brains. You know, the possibility of
actually living all that time on a single ship is crazy.
And instead it makes me think we are on that ship.
Like this is our ship. This is the one we get.
We're on it already. And we got to figure out how to
do a better job of thinking in advance, thinking in
generations. You know, you guys were asking

(01:11:05):
about like what's going to happen in a few generations,
right? Like, we need to think
that way. We're not thinking that way
right now. We're thinking in financial
quarters and we need to be thinking a lot longer than that.
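The generation ship arithmetic in that exchange can be roughed out in a few lines. This is only a back-of-envelope sketch, not anything stated on the show: the probe speed (Voyager 1's cruise speed), the 4.3 light year distance, and the 25-year generation length are all assumptions, and faster or slower craft move the answer between tens of thousands and hundreds of thousands of years.

```python
# Back-of-envelope travel time to Proxima Centauri b at probe speeds.
LIGHT_YEAR_KM = 9.4607e12   # kilometres per light year
DISTANCE_LY = 4.3           # distance to Proxima Centauri b, roughly
PROBE_SPEED_KM_S = 17.0     # Voyager 1's cruise speed, used as the assumption
SECONDS_PER_YEAR = 3.156e7
GENERATION_YEARS = 25.0     # assumed length of one human generation

distance_km = DISTANCE_LY * LIGHT_YEAR_KM
travel_years = distance_km / PROBE_SPEED_KM_S / SECONDS_PER_YEAR
generations = travel_years / GENERATION_YEARS

print(f"~{travel_years:,.0f} years, ~{generations:,.0f} generations")
```

At those assumed values it lands near 76,000 years, on the order of 3,000 generations, the same ballpark of thousands of generations the conversation gestures at.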
It seems to me. Proxima Centauri is very
interesting, just food for thought for people who are
listening. Proxima Centauri was once
considered a dual star system, and then they ended up

(01:11:27):
finding a third traveling star that actually orbits the two that
are orbiting each other. And, yeah, I think
that's fascinating. Yeah, yeah.
Someone in the chat said, and then when the ship gets
there, it's just a bunch of inbreds.
Totally. I saw that.
So I have, like, a half written screenplay in which
that's exactly the concept. Did you guys ever watch

(01:11:50):
The Name of the Rose?
It was an early Sean Connery movie.
It's a great movie, a murder
mystery set in a medieval monastery.
And everybody is super inbred. So they're all just like, you
know, and you would just imagine at the very end of that trip, as
they're approaching the planet, they're all just going to be
like so messed up, you know, it's so gross.
They're going to look like Sloth from The Goonies.

(01:12:12):
And they're all, like, naked because all the, you
know, clothing has melted away. You know, like, that's right.
That's right. Exactly.
It'd be a messy party. Well, was there anything that
you came on to try and specifically talk about today,
so we're not wasting all of the time on fantastical stuff
about UFOs and spacecraft? No, we've done it, you guys.

(01:12:33):
I really, I love hearing your perspectives on, you know, the
macroeconomic trends here and on, you know, whether these,
these systems really can replace jobs as we have them right now.
No, I think we've done a really good job.
I would say, you know, if you don't mind me promoting my own
stuff. Yeah.
The ripcurrent.com is where people can sign up for my
podcast if they'd like to listen.
I do a weekly interview with somebody. You know, The Rip

(01:12:54):
Current is named for, like, the invisible forces that I think
are kind of working on us right now.
And that's, you know,
all these weird political trends moving us
here and there, economic forces.
So each week is a different sort of expert in
something along those lines. So I think it's very

(01:13:16):
similar to what you guys are thinking about here.
And I really appreciate the opportunity to be here.
No, it's awesome. This was a blast,
really. Yeah, good episode for sure.
Appreciate you coming on. Thanks guys, I really appreciate
your time. Hope to do it again.
Absolutely anytime I'm going to go ahead and sign this off if
you could hang out for, you know, two or three more minutes
afterwards. That way I can just make sure

(01:13:37):
all of your audio and video are sent over.
Usually it's really quick, but every once in a while it
just takes a few extra minutes. I've been there.
I'll absolutely stick around. All right, Jacob, is there
anything else that you would like to plug or anything other
than The Rip Current? Again, the ripcurrent.com, come
check us out if you can. For some reason, my biggest

(01:13:59):
following is on Tiktok. I have a huge Tiktok following,
which is for an old guy, it makes no sense to me at all, but
I really enjoy that community And so come check me out there.
That's where I'm most active trying to grow my YouTube, but
I, I don't even, I'm not doing asmart job of that.
So I got to, I got to shift toward that if I can.
But yeah, ripcurrent.com, that'swhy I'm mostly.
We do have one question from the chat that popped up. They

(01:14:21):
asked what got him into the conspiracies, and how long ago.
How long ago? You know, so when you're the,
you're the editor in chief of Popular Science magazine, you
really can't, you really can't avoid them.
You know, you'll have people, it's very interesting.
You'll have people full of, you know, Popular Science.
The readership is, you know, it tends to be a lot of military
people. We got a lot of Navy, you know,

(01:14:41):
I think a lot of people, you know, you have people serving on
a submarine, right, who are at sea for a long time.
And magazines are kind of one of their
only forms of entertainment, so they'd have us with them.
And so, just a really interesting group.
You know, a lot of people would come to us with a lot of
interesting questions, and so I think that
got me going on it. And then you combine that with
all this reporting I've done around kind of, you know, why

(01:15:03):
certain ideas really stick in the brain, and that brings you
into contact with a lot of folks who think about conspiracy
theories as well. Cool. You put a smart guy in
front of this information, it's just natural, and you just
become a conspiracy theorist, and so.
That's right. Well, good.
Well, I'll make sure to tag you in all the TikTok clips that
we get from this episode. Appreciate it.
We'll spread, we'll spread the word.

(01:15:24):
I can't wait for your book to come in.
I'm sure I'll have a ton of questions afterwards.
So if you ever want to come back on the show, you're more than
welcome, man. Sounds great.
Yeah, Just call me up. I'm around.
All right, guys. Jake, you got anything for Jake?
No, but this whole thing has been tripping me out a
little bit, Jake. Jake, between this and your lazy guys,
Jake. Between this and your lazy guys,
you must be having a trippy evening.

(01:15:45):
Yeah, I'm ready for bed too. It's my bedtime.
I got to go work on planes tomorrow.
Good, good. But no, it was a cool
episode. Definitely not what I expected.
Yeah, I would love to have you back on.
That was interesting to hear your insights.
Sick, thanks guys. And another Californian is cool too.

(01:16:05):
I'm from Fresno. Oh, see.
Nice. Awesome, awesome.
And then next time, we'll make sure we have some even
wilder questions, something to grill you real good, right?
Yeah, you weren't easy on me tonight.
Yeah. All right, well, that has been
another episode of the Infinite Rabbit Hole podcast.
Until next time, travelers, we'll see you in the next fork
in the path of the Infinite Rabbit Hole.

(01:16:26):
Bye, buddy. Goodnight.
Hey everybody, thanks for checking out the Infinite Rabbit
Hole podcast. If you're looking for more of
our stuff, head on over to infiniterabbithole.com where you
can find links to all the podcast players that we are
available on and even our video platforms such as TikTok and
YouTube. While you're there, make sure to

(01:16:47):
check out all the links for our socials and hit that follow so
you know when all the new stuff from our podcast comes out.
And until next time, travelers, we'll see you right here at the
next fork in the path of the Infinite Rabbit Hole.