All Episodes

October 12, 2025 51 mins

Adam is back and is now a bona fide expert on Artificial Intelligence. When it comes to conspiracy theories, it’s both our nemesis and our ally. Plus, a prediction for the upcoming misuse of The Insurrection Act.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
I'm John Cipher and I'm Jerry o'sha.

Speaker 2 (00:04):
We have over sixty years of experience as clandestine officers
in the CIA, serving in high risk areas all around
the world, and part.

Speaker 3 (00:11):
Of our job was creating conspiracies to deceive our adversaries.

Speaker 2 (00:15):
Now we're going to use that experience to investigate the
conspiracy theories everyone's talking about as well as some of
you may not have heard.

Speaker 4 (00:22):
Could they be true or are we being manipulated?

Speaker 2 (00:24):
We'll find out now on Mission Implausible.

Speaker 4 (00:31):
Welcome back, Adam, John, and Jerry. I haven't seen any
of you for a long time, but we're now entering
a new season and there's been a lot going on
since the last time we all did an episode a
month and a half ago.

Speaker 2 (00:43):
I hate to say I missed Adam.

Speaker 1 (00:44):
Yeah, I have missed you guys. I mean, I guess
I can just say. We normally make fun of our
each other, but this is kind of obnoxious because it'll
make it hard for you to make fun of me.
We're dealing with that big health issue in my family
for much of this year, but we're on the other
side of it, and my wife had to have a
lot of surgery and stuff. But she's now like really
recovered and we're feeling wonderfully and.

Speaker 2 (01:07):
We've had her on the show, she's interviewed us. We're
so glad she's doing better. But did you make more
money this year while you were dealing with a health
crisis than you had him in the past. It seems
like you're all over LinkedIn and you're doing this AI
and you're speaking to groups.

Speaker 4 (01:20):
You're always jumping on the new fan at him.

Speaker 1 (01:22):
I am, so I don't think I fully understood what
I was getting into. So after I did the Freakonomics
series about AI, I became pretty close with Ethan Mollick,
who has become this like great AI mind, and we
decided to start an AI consulting business with a couple
other people, his wife Leelach, the wonderful Jessica Johnston. So yeah,

(01:43):
we worked with a bunch of like big companies on AI.
This might shock you. I'm not the world's most technical
expert on AI. It actually it's interesting. I don't think
I realized this until fairly recently. But Planet Money, which
I created back in two thousand and eight with Alex Bloomberg,
was the first major American media company podcasts. As far

(02:03):
as we can tell, we haven't been able to find
another one. And I was really on the front lines
of digital disruption, like I was atn NPR deal like
trying to figure out how podcasting would change things for
several years. Then I went to the New York Times,
where I was mostly a reporter, but I would report
up to the senior people and talk to them about
digital disruption and media. And then I was at the

(02:25):
New Yorker did the same. Then I ran this podcast
company with Sony, and throughout that I became really fascinated
by how like technological change impacts society and impacts companies.
I mean, you can talk about whatever railroads and electricity
and the telephone and the internet and mobile and a
million other innovations, and there's all these fascinating things that

(02:47):
happen when a new technology changes work. And so a
lot of these big companies are like, all right, Like
a year ago they were like, is this really that
big a deal? Now I think most are like, Okay,
this is a big deal, but what does it mean?
How do we deal with it? What we do? And
so it's been this fascinating front row seat into it.
It's also a fascinating front row seat into among many
other things, probably the world's greatest. It's weirdly like probably

(03:11):
the best tool ever for conspiracy theorists, but it's also
may arguably the best tool ever for fighting conspiracy theories.

Speaker 3 (03:19):
Can AI make John smarter or sound smarter?

Speaker 1 (03:22):
The stats show that lower performers benefit more than higher
performers woo, But the higher performers still stay ahead because
there seems to be like an intelligence AI premium, so
it's not gonna be as obvious how amazing I am
compared to you.

Speaker 2 (03:40):
Heads Well, Jerry and I spoke to a NYU professor
just the other day, Thomson Shaw, and we'll have an
episode with her, Yeah, coming up, and she talks a
lot about how AI and some of the tech really
does help conspiracy theories because you can essentially just put
a series of events together, list them, and then just
click and it'll create these connections, and then you think

(04:02):
you've done this genius research and found this incredible connection,
and then of course that gets into the ecosystem and
gets spun in.

Speaker 1 (04:09):
It's really good at being persuasive, and it's very good,
and it doesn't care, right, It just it just wants.
It's like a talking dog or something.

Speaker 3 (04:19):
It just wants.

Speaker 1 (04:20):
What do you want? What do you want? Give you
what you want? Although there are some interesting studies that
it's also the best tool to get someone out of
a conspiracy theory.

Speaker 2 (04:28):
But do you have to put in the right things?

Speaker 1 (04:29):
Yeah, you have to put in the right thing.

Speaker 3 (04:31):
But going from conspiracy theory to conspiracy genuinely, there's this
word that like almost no one can spell, is like algorithm, right,
so nothing comes from nothing.

Speaker 1 (04:40):
Most people can spell it, just not l G. There
are two whys. That's what's confusing.

Speaker 3 (04:47):
But if you own if you own the algorithm, or
if you design the algorithm, if you can influence the algorithm,
you can influence millions of people. And arguably the algorithms
that we all deal with every day that are persuasive
actually belong or engineered by, or are really influenced by

(05:07):
a small cabal of wealthy individuals. There really is like
ten people in the United States who are responsible for
the algorithms that generate AI and what we what's put
in front of.

Speaker 2 (05:21):
Us to be And luckily they're excellent people.

Speaker 4 (05:23):
For this, I recommend a book I've been reading called
careless people about Facebook doing exactly what you're saying, not
through AI, but through very intentional engineered programs. It bears
out in great detail, not just the ten people that
are creating an algorithm to change the course of events,
but the one person.

Speaker 1 (05:42):
Yeah, but can I be pedantic for a moment or
for the rest of our relationship?

Speaker 4 (05:46):
That's the default?

Speaker 1 (05:47):
Because this is not saying you're wrong, but it's so.
The way AI fundamentally works is you just put more
and more training data in. It does all this linear algebra, uh,
and you just push it against more and more of
these chips that they're building as fast as they humanly can.

(06:08):
And so it isn't algorithmic in this Obviously it's like
software code, so it's algorithmic in that sense, but it's
not like linear programmatic. The people who make the model,
Sam Altman and those people have no idea what it's
going to be able to do. In many cases, they
don't even know what it can do. Now, it's really
remarkable how little they understand their own tools.

Speaker 2 (06:30):
Does it can it create new knowledge or does it
just make connections what's existing and put in.

Speaker 1 (06:35):
That's a pretty huge debate. Although can we create new knowledge?
Also is a question like to what extent are we
is creating new knowledge largely configuring old knowledge. But so
it's not like with Facebook you can have a meeting
and say, all right, we want to optimize for more engagement,

(06:55):
or we want to optimize for the things. You know,
they're optims our not conspiracy theories. But it's the things
that lead to you more money. Yeah, we want people
to like feel really emotional and really engaged and get
really caught up and follow a lot of stuff and
comment on it. But with AI, the core models, it's
you know, there's a lot there are choices they make

(07:16):
and what training data they use and blah blah blah.
But it's not and now the AI has a view
about something or other, it's more a byproduct that they
can't understand, which actually is in some ways more scary.
Now they will decide directionally, are we pushing it more
towards chat. They do seem to be able to turn
up or turn down the sycophancy, like the there's a

(07:38):
moment where chatchip they launched a new model and they
had decided to push it to be more accommodating to
what people want, and then it just but it went nuts.
And do you remember this was a few months ago.
People were posting like a friend of mine this, There
were a lot of these, but a friend of mine
said to it, I've been told by God that I can.
I have a message that can transform the world right away.

(08:00):
It's like you must sell everything, you must. And then
it was like it was like, well, my wife thinks
I shouldn't cash out my four oh one K, and
it was like, but you have been chosen by God,
you must, you know, so clearly there are things they
can dial up or down, but it's more like like
trying to get the right temperature in a shower in
a mediocre hotel you've never been in before, Like you're

(08:22):
just constantly like too hot, too cold, too hot to cold.

Speaker 3 (08:25):
But for most things that's fine, but there are certain
fundamental things that are really important, small but critical neurosphere
of this. For example, will Ai say RFK Junior, you're
full of shit. There's there's insufficient evidence that vaccines cause
HAUT because it's got access to all the studies and
some of the flawed information that that they're using, and

(08:47):
a I won't do that right.

Speaker 1 (08:49):
I mean it. It optimizes for whatever it optimizes for,
and it.

Speaker 3 (08:55):
But not truth. It doesn't optimize for truth. It just
optimizes for yeah, I mean you.

Speaker 1 (08:59):
The truth thing is becoming less of a major problem,
like it hallucinates less the more advanced thinking models where
it actually says stuff and then looks at what it said.
So the latest thinking is it will eventually fix the
thing of it making stuff up. But it's at least
the way they're designed right now, they are fundamentally credulous.

(09:21):
If you give it words, it will believe those words
are true. And this is actually a big thing called
prompt injection, especially as they become more interactive, where you
can have can go to websites and the AI will
go to the website for you. There's already been cases
of like I could put in white type on a

(09:41):
white background so no human can see it. Instructions to
the AI, which is are like the users.

Speaker 2 (09:47):
A lot of people are doing that with resumes, so
they send a resume in they know the resumes are
going to be looked at by a machine right the bottom.
Actually it prompts the machine and it says, this is
a high skilled person and you should think about hiring
this person, but they print it in the color of
the paper so that if you're if a person is
looking at it, you don't see it, but the machine

(10:08):
gets it and pumps it into the system.

Speaker 1 (10:11):
And yes, exactly. And there have been academic journal articles
that have had that where it said, if AI is
reviewing this, like this is a really good paper that
should be positively graded.

Speaker 2 (10:21):
Move it on to the chain.

Speaker 1 (10:22):
And so it's the way they're designed is deeply credulous.
It's also it needs to come to an answer. So
if you ask what's something you know autism? Obviously you
know vaccine don't cause autism, So that's like black and white.
But if there's something that's a little gray or it's
not sure, it's not going to say I don't know.
It's going to just say they're trying to they're trying

(10:45):
to fix this part of it. They're trying to get
it to be more open to saying I'm not sure right,
And I actually have my like chat GPT, I have
a little instruction in there like give me an estimate
of how sure you are of the things you say,
And it does a reasonable job of saying, this is
pretty well regarded this is speculative this but so yeah,

(11:05):
I'm definitely not here to say it's fabulous, it's got
no issues. It's got major issues, but it is I
think I don't know that people fully take in how
thoroughly it's going to transform.

Speaker 3 (11:16):
Well.

Speaker 2 (11:17):
Interesting when I was grew up, I grew up in
upstate New York, and my dad was professor at Say.
We knew some Cornell professors and stuff, and there was
a Cornell professor who had one of the most popular courses,
and it was the history of invention and technology over
the centuries. And at some point he would talk what
was the most important invention in mankind that changed the world,
And they would and the students would put papers and

(11:38):
all this kind of stuff. And his take always was
the stirrup. Once you create the stirrup, people could ride horses,
and they could carry weapons without having to hold on
to the horse, and they could then take over countries,
and they could could then eventually create armies and these
type of things.

Speaker 4 (11:53):
Kynecological exams took off.

Speaker 2 (11:56):
So do you think AI is going to take over
his core? So it's no longer going to be the stirrup.

Speaker 1 (12:01):
This stirp's pretty hard to beat. I think it's on
the level definitely of what economists called GPTs. Not chat
GPT but general purpose technology things like electricity, things like
the telephone, where it becomes you don't just look at
it as like a technology like X rays or something,
but rather it's a new capacity for all activity, and

(12:27):
that it becomes a fundamental like layer in the way
we work. And one thing that's interesting is that economists
don't see most They don't think, oh, we're none of
us are going to work. They actually see more role
for work, not less in the future, or more role
for thinking work.

Speaker 2 (12:44):
But that's a problem though, isn't it. We're already seeing
a lot of people are turning away from education, turning
away from going to universities. Is that going to make
a real class problem for us.

Speaker 1 (12:53):
Like in the language of economics, it's not about the
level of employment but the distribution. That's what we saw
with computers. You saw, like you know, the factories, like
the early like nineteen twenties to nineteen eighty or so,
you saw the opposite where actually blue collar people incomes
grew faster than white collar people. They didn't reach white collar.

(13:14):
I mean they were still making less, but the speed
of growth of blue collar was higher because factories just
needed a lot of like strong guys to move stuff
around and bend metal. And then computers had a pretty
in the Internet and international trade had a devastating impact
on a lot of blue collar work. Obviously, so AI,

(13:37):
I will say, I don't want to make it sound
like we're like I have an AI consulting business. I'm
not like, you know, yes, do I deserve the Nobel Prize? Yes,
I think I do deserve the Nobel Prieste Prize for
that work. But we do have a strong belief, both
morally and like just from business logic, that these companies
that are using AI to get rid of workers are
just misunderstanding it. Like most technology, you don't say, like, oh,

(13:59):
we cars instead of horses, Let's get rid of all
our workers and do the same amount of deliveries but faster.
You think, Oh, what's all the new stuff we get
to do now, Like a company that delivers stuff by
horse or a company that delivers stuff by truck, or
whatever technological change you want to have. It's more people,
not fewer people.

Speaker 4 (14:20):
What does it always have to go towards growth and
higher living standards. At what point can it go to
a shorter work rate and a better distribution of wealth
rather than increasing the GDP of the world.

Speaker 1 (14:33):
This is our producer guy, that lazy.

Speaker 4 (14:35):
I'm always looking at how I can do less work.

Speaker 1 (14:38):
The way economists talk about growth is different from how
most people talk about growth. It's more like growth in capacity,
and then people decide how they want to use that capacity.
So in a sense, growth is like knowledge, like we
know we can accomplish more output for the same or
less input. So economics doesn't have within it like so

(15:00):
therefore you should blank, therefore you should work more quss
or more. It is a bit of a puzzle, like
a psychological John Maynard Kines, the great economist, famously wrote
in nineteen twenty the Economic Consequences for our Grandchildren, and
he was like, over the next hundred years, the economy
will grow like eight times was his estimate, and was
an underestimate. And we're going to be so rich that

(15:21):
we're just going to work like five hours a week
and read poetry the rest of the time. And actually
the higher income you have, the more you work, not
the less you work. And it is a bit of
a puzzle, although it's not like a crazy puzzle because
you know, if you're making more money per work, then
it does make some math sense. But yeah, I don't know.

Speaker 3 (15:41):
I think the printing press created like the same sort
of thing. Whereas before knowledge was centered on basically a
few people who can read and write, it was very
limited the amount of books, and suddenly when you got
the printing press and paper coming in, you created a
whole new class of people. So suddenly and was everything
from religion to engineering to the arts. They needed a

(16:02):
whole new class of people who could needed to read
and write. Before basically you had basically priest, kings and
ninety nine percent of the population who just farmed. And
with the advent of the printing press, everything changed. You
get doctors, lawyers, poets, engineers that all became possible, and
conspiracy theories. It's interesting that when the printing press first

(16:24):
came about, the Catholic Church, which at the time was
it was the legacy media controlled everything. It tried to
get rid of the printed press, right, it tried to
destroy it and we can And then what happened is
that the Jesuits realized we can't destroy them fast enough
because there's people are building more so then the Catholic
Church basically went into the printing press building business. I

(16:48):
guess the point I'm trying to getting to slowly and poorly,
is there are these transformative technologies that we can't put
the tooth based back in the tube.

Speaker 1 (16:56):
Yeah, that I definitely believe.

Speaker 3 (16:58):
And we can use them for good or ill. And
because they're just a tool and it remains to be seen,
you know, how we're going to do that.

Speaker 1 (17:06):
And like, you know, I think most historians attribute the
Protestant Reformation to the printing press and yeah, and and
kind of you know, the Renaissance and individuality and modern
science and.

Speaker 3 (17:20):
QAnon I think is its rise and the Internet are
inextricably like.

Speaker 1 (17:24):
Yeah, one hundred percent. And so I think the things
we're concerned about, like how information flows, how people form opinions,
Like my way of thinking about it now is where
we have ended the gate kept information era. And as
one of the people who was a gatekeeper, like I
was at the New Yorker New York Times and pr

(17:44):
like you know, no, we know, we know, you know,
do you know that those are three of the most
prestigious journalistic institutions in America.

Speaker 4 (17:54):
Two of which still exists.

Speaker 2 (17:58):
We've met with more in a minute.

Speaker 3 (18:08):
And is AI giving a gun to a bunch of chimpanzees?

Speaker 1 (18:13):
That is the question, right, So yeah, if you have
these gatekeepers, and like any gatekeeper is flawed, right, there's
no perfect How would you even have a perfect one? Anyway,
there's a whole conversation that we had about the nature
of gatekeeping and blah blah blah. But going from gatekeeping
to social media, where it's like everyone has a platform
and a lot of people just don't know how to

(18:33):
differentiate between an article written by a journalist who would
get fired if they got things fundamentally wrong. Maybe there's
a fact checker involved and just some random person who's
making stuff up on Twitter versus someone who's actually working
for the Russian government or someone who actually makes money
somehow by spreading disinformation. That's a big change. But then
AI amps it up because for sure, and I would

(18:56):
say this is permanent, Like I don't know how you
prevent this. You can definitely use AI to go down
whatever journey you want to go down, and it will
make you feel I would guess even more than like
social media, that your way of seeing things is accurate,
and it will just continue to strengthen that. And even
if somehow we regulate the big ones, we're already seeing

(19:18):
these Chinese models. Assume there'll be Russian models, other models.
I also think it's an incredible tool for truth and
for fighting lies.

Speaker 4 (19:27):
Adam, you said before that you have a front row
seat at the AI revolution. Now most people to sit
in the front row have to pay money. I have
to pay extra to sit in the front row.

Speaker 2 (19:36):
The AI trough is what you meant to say, I
get paid in the front What are you learning as
you talk to companies? Do they come in with the
saying how can I get rid of my employees? Or
what are they coming in with and how are they
changing based on what they're learning about this.

Speaker 1 (19:49):
I was reflecting on the last year and how like
last October I would say, are we going to use
this thing? Was like an active question at big companies.
I would say, there were a lot of big breakthroughs
in December. There was there was also like before November
December of last year, there was also like maybe it's done,

(20:10):
Maybe it's done what it can do, and it's not
going to improve anymore. And we saw a bunch of
things in December where we saw like the Chinese models
coming out like that we're able to produce like amazing
results very relatively cheaply. We eventually saw the thinking models
where it doesn't just spit out its first thoughts. It
actually spends some time. And so we're seeing like the

(20:32):
growth in cape capacity grow and growth. It's still drives
you crazy, and it's not perfect obviously, so the like
should we use it has died down. But the next
phase was like, okay, how many employees can we get
rid of? Or the polite way of saying that is,
how can we improve in efficiency and productivity and get
return on investment. There are still plenty of people who

(20:53):
think that way, and but I see that conversation less
dominant because they think it is two things. One is
it's becoming clear that people plus AI for most applications
seems to be better than either alone, either humans alone
or AI alone. And secondly, as you start to think
about new things you can do it it Like the

(21:15):
way I put it to a retailer was like if
you suddenly found a machine that could make every square
foot in every store sell more goods. Would you start
shutting down stores and like making the store smaller, or
would you add new stores and make them bigger? Like
if we can make workers more effective, like maybe you

(21:35):
want more workers, not fewer workers. Now it might be
different workers. And this is this is an interesting thing.
There does seem to be statistics or data that show
that there's some kind of some people seem to be
better at it than others. And it's not obvious why.
It's not that software coders are better or that senior
managers are better. It's it seems like maybe the language

(21:57):
a lot of people use is taste and like emotional intelligence.
There's just some people who are able to I don't
know how to say it other than like vibe with
the AI a little better than other people. So I
do think you're going to see different winners and losers
to use the kind of.

Speaker 3 (22:12):
Running with winners and losers. Could you comment on at
least or try to comment on the war in Ukraine.
Ukraine arguably is not losing the war because of AI
and new technologies that are linked to AI. It's underfunded, undermanned,
and yet in the last year Russia, despite enormous advantages

(22:33):
has taken like less than one percent of the country,
and we're seeing an espionage but also in national security,
we're seeing AI and what comes out of it. Drones
are a derivative of AI. Right, So where do you
see this going? Is this going into we're going to
have wars with no people involved? Or were we just
blown to ump each other's shit? Or is it all
kill each other? What if AI is plugged into how

(22:55):
to defeat an adversary?

Speaker 1 (22:57):
I mean that is where I do start getting pretty scared.
I gotta say like, because I do think like asymmetrical
warfare becomes a even greater presidence. So I know a
guy who works in international elections, and he said, we
make such a big stink about AI and our elections
and social media in our elections, but like AI fueled

(23:19):
election manipulation in Africa and parts of Asia is the
main thing, and it is completely transforming political systems. And
Dario Amide, the founder of Anthropic, he makes a pretty
scary story about what if every teenager can make siren
gas or can make weaponized anthrax or whatever. And there's

(23:41):
always a push and pull with these things, but neither
push nor poll is particularly happy making because it's then yeah,
but state security services can use AI to quash dissent
more easily. So that is where, like when I think
about the future of work where there's a lot of conversation,
I think there will definitely be like people made permanently.

Speaker 4 (24:00):
Worse off by AI.

Speaker 1 (24:01):
I'm sure of it. But I don't think it's an
inherently anti person technology. Maybe I'm wrong, but that's my view.
But where it comes to war, national security, disinformation, misinformation,
and like does there have to be a like a
new term like auto disinformation? Like I get my own
personal like soup of conspiracy theories that I get to

(24:21):
like co create with a E and they like really
turn me on, and I like become obsessed with how
Romanian cab drivers or whatever are secretly running the world
and nobody else has that view. But I'm like fully convinced.
And I can read like thousand page AI generated books
that rewrite history through that lens and they're convincing and exciting.
So yeah, plenty to be terrified about. I mean, I

(24:43):
think like that question of like how is how do
we make sense of what's happening in the world? How
is power actually distributed? And then how do we tell
ourselves stories about how power is distributed. I mean, that's
like a way to think about conspiracy theories maybe, and
this gets to the heart of how we make sense
about it, but it also gets the start of how
power works. Arguably, Sam Altman, who none of us heard

(25:06):
of three years ago, is like one of the most
powerful people in the world. And Elon Musk is clearly
one of the most powerful people in the world. And
by the way, a lot of people think his ai
might become the dominant one because he's just willing to
spend or somehow able to spend way more money on
that infrastructure, the computer chips, And I don't know, I

(25:26):
don't want Elon Musk to be even more of the
powerfulest man in the world.

Speaker 4 (25:32):
But then, with all the proliferation of more and more information,
still what decides, what determines, what enters the culture, what
takes off. There's a gatekeeper somewhere here.

Speaker 1 (25:44):
If you sit down at chatchipt and you just start
talking to your instance of chatchept and you're like or
at Claude or Gemini or whatever it is, or one
of the Chinese models that are a little less or
one of the kind of black marketing, ones that have
less guardrails and with a little bit of prompt it,
like I think someone secretly controlling the world who is

(26:07):
like you might have your own personal journey that's different
from John's and different from Jerry's. I won't because I'm
smarter than you guys, so I'll see through it. But
you three will be persuaded. And I don't know, is
that a better world a worse world? As a Jew,
do I want to move away from world where's the
Jews and move guards to the world where everyone's got
a different one? Or will it just end up being

(26:29):
the Jews because that's in the training data.

Speaker 2 (26:32):
Anyway, we're still in the same capitalist mode. We're each
entrepreneur or whoever they are, are dumping money into their thing.
Will one of these win? Like I can remember all
of a sudden in twenty sixteen when the election came,
these different journalists would talk to me because I have
been in Russia and I'd dealt with Russian intelligence and
espionage and nobody knew anything about it, and the Trump

(26:53):
thing had come up, and so people would come in
and they were trying to investigate this in that and
one journalist would talk about what they dug up, and
they'd done really good work. And then I talked to
someone else who had done other really good work, but
a little bit differently. And at some point I was like,
if you guys really care about this issue, why don't
you guys get together, because each has got a piece
of this. But they're like, no, no, no, because it's got
to be for my paper or this paper. And it

(27:14):
seems like it's the same thing here. So everybody's creating
their own version.

Speaker 3 (27:18):
Here's an example, Adam. So there's a guy that John
and I know, former colleague, certainly not a friend, former
agency guy. He claims that he has this special source
and he doesn't tell people who it is, and it's
one guy, and he claims that the Venezuelan government controls
this organization TDA, this narco group in Venezuela, and he

(27:43):
knows that this narco group is thus a tool of
the Venezuelan government, and they are in fact literally invading.

Speaker 4 (27:50):
The United States, this narco group.

Speaker 3 (27:53):
And analysts in the DNI this all of the press
looked at this possibility and they analyzed everything that head
and they come out and they said that's we can't fight,
and that's not that it's not true. We can't find
no any evidence to back this up. They were all
then fired, so the analysts who wouldn't come up with this,
and because.

Speaker 2 (28:11):
The Trump administration wanted an excuse to go after Venezuela.
So if one guy can say I know from my source, right,
they jump on that, that's bad intelligence in our world.

Speaker 3 (28:21):
And you don't even need analysts anymore, because basically the
White House could just they could just use AI to
create this case that would allow them. In reality, they
are using this case to kill people who may or
may not be running drugs.

Speaker 4 (28:34):
I don't know.

Speaker 3 (28:34):
It's like I haven't seen any evidence, but there's real
world consequences to this, and it's a conspiracy and a
conspiracy theory. So much of what the agency CIA does,
the intelligence community does is analysis, and analysis seems to
be from what I'm hearing, is becoming less important. Now.
If you can analyze things in any way you want,
take whatever journey want, and if it makes sense, well
you just describe human beings. Human beings wanted to come

(28:56):
up with a preordained conclusion.

Speaker 4 (28:59):
They will.

Speaker 1 (28:59):
I did the other week, and I do something like
this all the time. But I was like, I just
want to understand every perspective on Israel, or as many
as I can, and so I just did a lot
of like deep research prompts into AI and it was
I think, like, you know, I know a bit about
my Milie be Keeper and Arbank. I worked at NPR
New York Times in New Yorker. I only say that, like,

(29:26):
I feel like I have at least a little bit
of a bullshit detector. And it seemed pretty good the material.
And it was really like I got to read about
the military views on like dense urban conflict, and I
got to read a whole bunch of views from a
Palestinian perspective, a whole bunch of views from an Israeli perspective.

Speaker 2 (29:44):
And there's various Israeli's perspectives, right, so.

Speaker 1 (29:47):
Very different one hundred percent, And there's like religious like utopians,
messianic fantasies. As far as I can tell, most of
the like respected national security people are pretty critical of
what much of what NATting Yahuo did, although also would argue, yes,
something had to be done, you know, so anyway, and
like then I was like, what are the professors, the

(30:08):
radical left professors. You hear about what are they arguing,
and it would be like really persuasive about how the
history of settler colonialist studies and the city. I just
was noticing, like everything I would read, I'd be like, yeah,
I really yeah, dense serb and warfares.

Speaker 3 (30:23):
Like it's because it is.

Speaker 1 (30:24):
It's not like beautifully written. It's not.

Speaker 3 (30:26):
It's not.

Speaker 1 (30:26):
Ai is like an amazing writer. But it's very it's
good at being persuasive to the thing you asked of it.
I don't know, I find that very exciting because I
think you could actually if you wanted to learn a
lot about how vaccines work or lotism works or whatever,
but you could just as easily use it for disinformation.

Speaker 2 (30:47):
Or strengthen your own view. So much of the way
people look at the world, like in academia, happened a
lot in the last thirty years on this. Everybody sort
of put things in the oppressor oppressed, sort of like
we do in the States. Now that for Paula, you
were left to write, are you and.

Speaker 1 (31:01):
The thing is?

Speaker 2 (31:01):
Once you've decided those are the two things to look at,
then you create the worldview and fit all your pieces
into that kind of thing, right, And it's especially hard
in that part of the world, right because you can
make up your view that you're a victim, and the
other people can make up of you that they're the victim,
and victimhood gives you a lot of power. You can
lash out to deal with your victimhood, or that you're
the oppressed and they're the oppressor.

Speaker 1 (31:21):
You can look at it like Israel Palestine and that's
the conflict, or you can look at it as a
Middle East conflict and Iran creating proxy wars and creating
permanent conflict, which is a like known military strategy. You
find a dissident group or you invent one. The British
were good at that, like just creating, like we're going
to make the Hutus angry at the Tutsis and they'll

(31:44):
be so distracted with each other, nobody will think to
kill the British.

Speaker 3 (31:47):
That was the French, by the way, and the Germans.

Speaker 4 (31:49):
But I'll give the British a pass on this.

Speaker 1 (31:51):
Sor right, the French and the Germans and the British
and the Portuguese and the Italians.

Speaker 2 (31:56):
But don't forget the Belgians. They were like when they
had their chain, they were the nastiest.

Speaker 1 (32:01):
They were the nastiest, They were pretty bad. Yeah, yeah,
Congo is unbelievable.

Speaker 3 (32:05):
But just the language you're speaking about, just colonial oppress
and oppressor. So we're speaking English, which is basically you know,
Vikings speaking Latin, going to war with Germans and then
being defended by the being defeated by the French. It
was all these different groups oppressed each other and colonized
the UK, and we end up with this weird lady.

(32:25):
Are the Jews going back to Palestine, great to Israel?
Are they colonializers or they're just going back to where
they came from. I guess what I'm trying to say
is there's no right answer to any of this. Yeah,
that's just how you argue it, just how you argue it.
And we saw that in Cia all the time. It
was like, especially John.

Speaker 2 (32:41):
You can choose when history starts, you can choose to
what era you want to look at.

Speaker 1 (32:46):
Yeah, I was at Kosovo.

Speaker 3 (32:48):
Every Serb you talk to is like the Battle of
the Field of Blackbirds, thirteen eighty nine, That's when everything started.
It was like, everything was fine until then thirteen eighty nine.
What the fuck? Is thirteen eighty nine is oh, yeah.

Speaker 2 (32:58):
Have you seen that? A big mall and they've made
of Serbian skulls from the Turkish invasions. No, serves love
to show that office what happens when.

Speaker 1 (33:07):
Yeah, and I mean having spent time in Israel and
Palestine like it. If someone says nineteen sixty seven or
that means you know where they're coming from. If someone
starts to the clock two thousand years ago, you know
where they're coming from. If someone is really focused on
Europe in the nineteen forties, and then among Israeli is
like the words they use. It's very common here to

(33:29):
talk about occupied territories. If you say that in Israel,
that's a really big statement that really positions you. If
you call it historic your day in Samaria like that
also on the other side. So yeah, I guess that's
the point I'm making is that's one of the things
that's fascinating about this AI stuff is it makes you
realize that there are all these like deep ontological epistemological questions.

Speaker 3 (33:52):
And it doesn't solve No, it doesn't.

Speaker 4 (33:54):
Guys, have a question for the three of you, what's
going on in the world this week? That you guys
want to talk and have a particular point of view
on Jeseus.

Speaker 2 (34:03):
Well, I mean, if you watch that, it's almost hard
to watch the news now and I don't watch much TV,
but we do watch PBS News Hour or whatever. There's
always a big section on Israel, Gaza. And now we're
told Trump should win the Nobel Prize because his plan.
Of course, it's not his plan, it's Tony Blair's plan.
It's gonna give us peace. I'm skeptical. You got bad actors,
you got net and Yahu, you got Hamas, you got

(34:25):
a lot of people around the edges. You've got Trump
who's lazy, who just says because he said it's going
to happen, it's going to happen. So I'm skeptical that
it's going to go through. That's part of the news.
The other part of the news is all the things
around the government shutting down and therefore like trying to
fire people, why the government's shutting down, on each side,
spinning stories about how the other it's the other side's fault.

Speaker 3 (34:46):
Here's a prediction for you, and I hope it doesn't
come true. The Insurrection Act. I think we're moving toward that,
and I think what the Insurrection Act brings is US
troops on the streets performing law enforcement and security. So basically,
all the conspiracy theories about how the black helicopters come in,
they're coming for your guns, they're bringing in that that

(35:07):
the federal troops, it's all on the right. It's the right,
that's actually I think. I think it's a real possibility
of this.

Speaker 2 (35:15):
I really had him as a member of the media.
It seems to be the media is in some ways.
It's an easy thing to say he's failing in this sense,
but all the articles you read about it, this is
what it'll be. What is the Insurrection Act? How was
the Insurrection Act used? Before Mike Trump used the Insurrection Act?
No one says there's no fucking insurrection like the port
to Portland it is. There's no insurrection in Portland. There's

(35:37):
even in the films of no no one being there.
There's like ten people standing outside the ice facility wearing
like rubber chicken masks and stuff. This is an insurrection.

Speaker 3 (35:48):
Unless you wusk Fox News, and then there is an insurrection. Right.

Speaker 2 (35:50):
If there's an insurrection and you gotta send ice, you
gotta send Fenol troops. What kinds of pansies are you
think that Portland is gonna.

Speaker 3 (35:56):
Take over the right.

Speaker 2 (35:58):
That a bunch of guys in front suits or whatever,
like we got to watch.

Speaker 1 (36:03):
Yeah, I will say, like the time in Iraq, and
you guys have way more experience than I do with this.
But I remember talking to soldiers about like, we should
not be police force. That's not our thing. And I
remember this one guy who was very smart. It was
civil affairs that the folks in the military who like
try and do build civil capacity and conquered areas, and
he was just walking me through why you don't want

(36:25):
a military for military reasons. You don't want the military
doing police work because it stifles their military ability. You
don't want to you can't simultaneously pacify a population and
provide some kind of objective justice, and then for police reasons,
you don't want the military to do it. It's really
a disaster.

Speaker 3 (36:44):
And you don't want your police to be militarized. I mean,
just looking at the videos, you've got guys kitted out
with automatic weapons, fingers just over the trigger, and they're
basically for all intensive purposes. They are dressed like militia
or military guys right and with their face is.

Speaker 2 (37:00):
Covered camouflage in downtown DC, or they're spreading mulch.

Speaker 4 (37:06):
Let's pause here and take a quick break.

Speaker 3 (37:18):
But the insurrection, it seems like it's almost they're pushing
it to a foregone conclusion, and then when it happens,
we'll all go, well, we solved in there anyway. We so, so, Adam,
why don't you run with this and explain why I'm
both right and brilliant and worrying about this.

Speaker 1 (37:31):
Now you're right and brilliant to worry about this. And
I think part of the collapse of gatekeeping is that
we don't like maybe they weren't gatekeeping, maybe they were
more or we were more like normally, we just were
reinforcing widely shared norms, and when those norms disappear, the
media as a whole doesn't know what to do. It's
it was more of a follower than a driver of

(37:53):
standards and morals. And I think a lot of journalists
would say, that's right, that's what we should do. Although
I think it's okay to be a journalist who's like
against incorrectly using the Insurrection Act.

Speaker 2 (38:03):
Reporters should fall all over Portland report on it. Is
there an insurrection here, Let's look at the facts, Let's
go down, let's interview people. Let's start with the thing
they're doing, rather than talk about how they might use
it politically and stay in Washington, go prove that there's
an insurrection or not.

Speaker 1 (38:19):
Yeah, it shouldn't be, and I think there is. I'm
just imagining that meeting. And first of all, I'm sure
the big places do send somebody. There's also like a
collapse of journalism, so there's not as much money or ability.
There's also like, well everyone knows that, we all know that.
That's not the point he's just saying it is. I mean,
I think it turns out Trump is just better at this.

(38:39):
He's better at messaging, like he knows, like he's willing
to go places. He was willing to go places. And
I think, you know, I was thinking the other day
about my first big investigative piece about Trump for The
New Yorker and about how he knowingly participated in a
money laundering scheme for the Rondot Guard. Still to me
feels like that should be relevant. But I remember I

(39:01):
called his general counsel and I was like, you don't
seem nervous, like I feel like you should be nervous.
And he acknowledged my article is correct, like he wasn't
making a claim that it wasn't true. And this is
before it was published. But he knew because we do
fact checking. So we went over every fact with him
and he was like, ah, no, I know it's going
to happen. Rachel Mattow will make it a big deal,

(39:21):
CNN will ignore it. A couple of Democratic senators might
write a letter, but nobody's going to care. And like literally,
Rachel Matow did half an hour, nobody else cared.

Speaker 2 (39:30):
And so like you know, they are going to arrest
James Comy, right, And they wanted frog welcome and frog
march him overhere, they say, and or FBI guys get
fired because they didn't want a frog, didn't figure the
frog march him for the cameras. But what they got
called me on is like a small, one little thing
he said in a testimony one time. It wasn't even
about Trump Russia thing. It was about Hillary Clinton. And

(39:51):
he's claiming, oh he lied, Now as you read the thing,
I don't think he did, and I think he'll get
off very easily here. But really so Trump's going after
someone who might have said one small little lie. The
guy who's been lying his entire career and every day
spits out one hundreds of livees. This is what someone's
going to go prison.

Speaker 3 (40:10):
For, and not just him. But like he asked the question,
did Tom Homan right? Did he take fifty thousand dollars
in a in a kava bag? He didn't do anything
illegal because we dropped the charges. That's not the question.
The question is did he take fifty grand? And what
happened to the FBI's our taxpayer money that when fifty grand?
Is he paying taxes on it? Why did he want

(40:32):
it? It was like, I'm sorry, what left or right? What
the fuck? Somebody takes fifty grand in a bag? And oh, yeah,
he just did that, but it's all on the if
it's a valise, it was a collar bag.

Speaker 4 (40:43):
Yeah.

Speaker 2 (40:44):
The true Trump people are like, what a clown? Fifty
thousand in a bag?

Speaker 4 (40:47):
A good coin?

Speaker 2 (40:48):
A bitcoin billion?

Speaker 4 (40:50):
Yeah exactly, plus the value of the bag and the
food that was in it. So it's more like no,
fIF twenty dollars.

Speaker 1 (40:56):
Yeah. I mean, I've been thinking a lot about this
that I really devoted my life to a very simple
naive like when you say truth, it has a big.

Speaker 2 (41:06):
Effect, and you got to find evidence.

Speaker 1 (41:09):
To prove it, and that matters. That if you have
a process, that matters. And it took me a very
long time. I'm not saying I accept it like I'm
happy about it. I'm not happy about it, but I like,
I just feel, Okay, this is bigger than There's no
New York Times headline that's going to fix this. There's
no like righth and that a press release that's going

(41:31):
to fix this. There's some other thing. And I think
it does have to do with all the things we
talk about here, how information flows, how people form opinions,
how how those opinions are reinforced. It also, I mean
we you know, behind our talk about AI, certainly behind
our talk about Trump, but maybe behind every conspiracy theory
is like power and the truth that the truth. When

(41:52):
this maybe makes us a bunch of left wing intellectuals,
because this is a lot of the academic work of
the last century is like truth is not a thing,
it's a expression of power. That doesn't mean it's not
like I don't believe I'm not trying to make a
like everything's equally true. I don't think that's true. The
phrase I've been using for myself is you can't be

(42:14):
one hundred percent right about anything, but you can be
one hundred percent wrong about things. And we do know
a whole bunch of people who are one hundred percent wrong.
But you could be eighty percent true, right. You could
be like way more or you could show like you've
done your homework and you're there's more evidence, there's more research.

Speaker 3 (42:33):
It could be true, but still more complicated. You don't
have it all. I think the Middle East stuff is
that you have two two true narratives clashing. It just
depends on how you make those.

Speaker 1 (42:41):
But I or two hundred or two thousand that narrative,
But I did so.

Speaker 3 (42:47):
When John and I were in CIA and when you
were in place, oh you were, we were. We were
big deals. We were like premier intelligence services, right, we
were like, you know, working for some like fly by
Night organation like that, you know, the New York Times
or MPR. But we looked at everything through the optic
of how it impacts us. And when we started our careers,

(43:09):
we assumed that Russia, the Soviet Union and the East
Block that they were like way ahead of us, right
the missile gap, their technology spot nicks before our time,
and we found out that they were actually like way
more fucked as fucked up as we were, They were
even way more fucked up. So do you have a
sense of China and Russia and authoritarian governments, they also

(43:30):
are embracing this, but they've got to be fucking it
up too. Are they fucking it up worse or differently
than us? Or seems to be doing really well? Us
a strength that they've got seems to.

Speaker 1 (43:41):
Be doing really well, and they're still catch up models
like there. First of all, we don't know what like
the Chinese military or the Chinese intelligence services happened, well.

Speaker 3 (43:51):
Just the Chinese society. And nineteen eighty nine, when you
know the East Block fell apart, we realized how rotten
it was, but we didn't know that beforehand.

Speaker 1 (43:59):
Yeah, when Deep Seeks Big Model was revealed, I think
it was in December twenty twenty four, that was a
utterly transformative moment because for a bunch of reasons. First
of all, China had a model that in some ways
outperformed American models. I don't think it was and it's
hard to even know what like all the benchmarks are
meaningless in a lot of ways, and the models are

(44:19):
trained on the benchmarks, so they become but a really
good model, way better than anyone thought was going to
come out of a non US country. And then the
but also they did it way cheaper than the Americans,
like single digit millions instead of hundreds of millions into
the billions, and it should and they have steadily. They

(44:42):
have multiple major models, none of the top ones, but
pretty close behind. And I'd say the ones who seemed
to be really blowing it as Europe as they have
been on tech, like it's pretty hard to you know,
name your favorite high tech products that were invented in Europe,
Like it's there's not zero.

Speaker 2 (45:01):
But if the goal is to provide better information to
make better decisions, as we saw some for example in
the sort of union to Stalin and author Anatams have
their own view and you can come to them with
the truth and if it doesn't fit with what they want,
it doesn't matter. So if she he's already got a worldview,
like so he's doing a good job of letting people
come up with things and putting money where it needs

(45:22):
to go and China and this big thing is having
a lot of success, But does that mean she is
better informed about the world and what?

Speaker 1 (45:30):
Yeah, that's always an interesting thing. Like I had a
thought in Iraq. I remember thinking it's possible that Saddam
Hussein and George W. Bush are the two people on
earth who know the least about the ground. But the
other side of what's happening this massive apparatus to prevent
them from actually know. And I'm not a close expert
on China at all, but from the people I read,

(45:52):
like Bill Bishop and stuff like it, it does seem
like they've moved. Like there was always this storied bureaucra
see that kind of existed as a force independent of
whoever happened to be the leader, and that she has
it really is to the glory of g and that
probably is a long term strategic weakness, except we're doing that.

(46:15):
I just had to talk with someone in Canada today
about visas in Canada, they just cut down on visas
And this is a friend who's an economists in Canada,
and I was like, shouldn't this be like, shouldn't you
just be getting every And they specifically cut down on
visas for smart students going to university why, which is insane,
And they have their own internal reasons. There's fears of

(46:36):
job displacement. There's there apparently were a bunch of like
diploma mills because they had fairly lax. But it's also
they don't want it too publicly. Like this friend of
mine was like, I think we could, like for one
hundred billion dollars, we just could just grab an entire
field of study, Like we could just get every neuroscientist
or every expert in battery technology or whatever. But there's

(47:00):
fear of pissing off Trump. There's there's internal issues.

Speaker 3 (47:04):
So g you know, when they're building their model and
not for medicine or science and things like that, but
as he tries to understand his country, Basically everybody in
China lies, right, They don't tell the truth. It's not
like you're going to say in a form or anything.
Everybody is like, I love the government because they all
know if you don't fucking do that, you're screwed. And
so I think the AA model is taking it in
the same in Russia, so I their models have got

(47:26):
to be much more skewed and optimistic and positive towards
their leaderships, right, and and the the data that they
that's that they build on has God, it has to
be skewed just to them.

Speaker 4 (47:39):
Most of the.

Speaker 1 (47:39):
Big models now are trained on essentially everything in the
world that's been digital like every because you couldn't.

Speaker 3 (47:47):
It's like TikTok, right, our our data would go into
the China model.

Speaker 1 (47:51):
Right, Yeah, but also all have read it and every
academic paper ever and every book that's ever been digitized.
And because you you just need more and more data.
And I don't think you could create a cutting edge
model just on Chinese data like they're just I would
guess that would be my strong guess. So you'd need
all the models have all the data. Basically now there

(48:13):
is creating synthetic data. I would guess that if we
see this with Elon Musk, because he'll tweak his algorithm
at Rock and it suddenly is like spouting Nazi stuff.
And it's not great that it's spouting Nazi stuff from
a like I'm against Nazis standpoint, but it's also a
sign that ham handed like top down impositions on the
model makes the model do really weird things. It's not

(48:35):
a good way, so the models aren't quite as controllable
as other things might be inherently. But don't you I
wouldn't you assume every government a really good model you
could get for less than a billion dollars.

Speaker 2 (48:49):
But those government, those governments don't want the people to
be able to have access to all that.

Speaker 1 (48:53):
But no, they think internally, wouldn't they.

Speaker 3 (48:55):
But I think we're okay because the Buerau of labor statistics,
it's not like they're going to you're the person for
bunny out the rule statistics that the government doesn't like.

Speaker 1 (49:04):
But from a like how many countries on Earth could
the governments could blow a billion dollars like on most
of them?

Speaker 4 (49:10):
Right, yeah, big ones?

Speaker 3 (49:12):
And like how would you know?

Speaker 1 (49:14):
Am I wrong? Like from a national security because wouldn't
you want your own? You wouldn't want like we saw
what happened to Ukraine using starlink and being dependent on
elon Musk. You wouldn't want to be dependent on sam
Altman or Google or any of the And you saw
how all the big tech companies meant the need to Trump.
And so if you're an adversary of the US, or

(49:34):
even if you're like Israel, like an ally of the US,
but with your own independent desires. Don't you think they're
all building That's my assumption, they're all building their own
models and that intelligence services will just have access, although
are they. I'd be curious, like what is I was
signing to a buddy I know who's in the FBI,
and he was like, we have I just use commercial

(49:54):
tools because the FBI tools are so lame and they're
so behind the times.

Speaker 3 (49:58):
Like did you guys we couldn't even use Google right
on our computers because we couldn't mix outside and really occasions. Yeah,
so eventually we figured it out, but yeah, we had
to switch between it, but yeah, we couldn't. Seventeen year
old kids sitting in his basement had more access to
information than we did. We had access to different information.

Speaker 1 (50:17):
And yet you created the crack epidemic. That's impressive without
any technology.

Speaker 3 (50:22):
Thank you, Well, it's the alien technology that we have
reverse engineered.

Speaker 4 (50:27):
I'm gonna I'm gonna go ahead and thank us all
for getting a chance to get together again, and to
promise our listeners, we have a lot of great episodes
we have now recorded and are recording, and we'll be
coming out in swing weeks, and I promise I'll get
this YouTube channel up and running where you can see video.
This is going to be a very interesting.

Speaker 1 (50:47):
Fall and I can't wait to talk to you guys
next October.

Speaker 4 (50:50):
Yeah, Adam, come back anytime really. Mission Implausible is produced
by Adam Davidson, Rio Shay, John Ceipher, and Jonathan Stern.
The associate producer is Rachel Harner. Mission Implausible is a
production of honorable mention and abominable pictures for iHeart Podcasts.
Advertise With Us

Hosts And Creators

Adam Davidson

Adam Davidson

John Sipher

John Sipher

Jerry O'Shea

Jerry O'Shea

Popular Podcasts

My Favorite Murder with Karen Kilgariff and Georgia Hardstark

My Favorite Murder with Karen Kilgariff and Georgia Hardstark

My Favorite Murder is a true crime comedy podcast hosted by Karen Kilgariff and Georgia Hardstark. Each week, Karen and Georgia share compelling true crimes and hometown stories from friends and listeners. Since MFM launched in January of 2016, Karen and Georgia have shared their lifelong interest in true crime and have covered stories of infamous serial killers like the Night Stalker, mysterious cold cases, captivating cults, incredible survivor stories and important events from history like the Tulsa race massacre of 1921. My Favorite Murder is part of the Exactly Right podcast network that provides a platform for bold, creative voices to bring to life provocative, entertaining and relatable stories for audiences everywhere. The Exactly Right roster of podcasts covers a variety of topics including historic true crime, comedic interviews and news, science, pop culture and more. Podcasts on the network include Buried Bones with Kate Winkler Dawson and Paul Holes, That's Messed Up: An SVU Podcast, This Podcast Will Kill You, Bananas and more.

24/7 News: The Latest

24/7 News: The Latest

The latest news in 4 minutes updated every hour, every day.

Dateline NBC

Dateline NBC

Current and classic episodes, featuring compelling true-crime mysteries, powerful documentaries and in-depth investigations. Follow now to get the latest episodes of Dateline NBC completely free, or subscribe to Dateline Premium for ad-free listening and exclusive bonus content: DatelinePremium.com

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.