
October 12, 2025 · 51 mins

Adam is back and is now a bona fide expert on Artificial Intelligence. When it comes to conspiracy theories, it's both our nemesis and our ally. Plus, a prediction for the upcoming misuse of the Insurrection Act.

Watch Mission Implausible on YouTube: https://www.youtube.com/@MissionImplausiblePod


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
Mission Implausible is now something you can watch. Just go
to YouTube and search Mission Implausible podcast, or click on
the link to our channel in our show notes. I'm
John Sipher.

Speaker 2 (00:21):
And I'm Jerry O'Shea.

Speaker 3 (00:23):
We have over sixty years of experience as clandestine officers
in the CIA, serving in high risk areas all around
the world, and...

Speaker 4 (00:30):
Part of our job was creating conspiracies to deceive our adversaries.

Speaker 3 (00:34):
Now we're going to use that experience to investigate the
conspiracy theories everyone's talking about, as well as some
you may not have heard.

Speaker 2 (00:41):
Could they be true or are we being manipulated?

Speaker 3 (00:43):
We'll find out now on Mission Implausible.

Speaker 2 (00:50):
Welcome back, Adam, John, and Jerry.

Speaker 1 (00:52):
I haven't seen any of you for a long time,
but we're now entering a new season and there's been
a lot going on since.

Speaker 2 (00:59):
The last time we all did an episode was a
month and a half ago. I hate to say it, I
missed Adam. Yeah, I have missed you guys. I mean,
I guess I can just say it. We normally make
fun of each other, but this is kind of
obnoxious because it'll make it hard for you to make
fun of me. We were dealing with that big health
issue in my family for much of this year. But
we're on the other side of it, and my wife

(01:20):
had to have a lot of surgery and stuff. But
she's now like really recovered and we're feeling wonderful.

Speaker 3 (01:26):
And we've had her on the show, she's interviewed us.
We're so glad she's doing better. But did you make
more money this year while you were dealing with a
health crisis than you had in the past?

Speaker 2 (01:35):
It seems like.

Speaker 3 (01:36):
You're all over LinkedIn and you're doing this AI and
you're speaking to groups.

Speaker 2 (01:39):
You're always jumping on the new fad, Adam. I am,
so I don't think I fully understood what I was
getting into. So after I did the Freakonomics series about AI,
I became pretty close with Ethan Mollick, who's become this
like great AI mind, and we decided to start an
AI consulting business with a couple other people: his wife, Lilach,
the wonderful Jessica Johnston. So yeah, we worked with a

(02:02):
bunch of like big companies on AI. This might shock you.
I'm not the world's most technical AI expert. It's actually
interesting. I don't think I realized this until fairly recently.
But Planet Money, which I created back in two thousand
and eight with Alex Blumberg, was the first major American
media company podcast. As far as we can tell, we

(02:22):
haven't been able to find another one. And I was
really on the front lines of digital disruption, like I
was at NPR, like, trying to figure out how
podcasting would change things for several years. Then I went
to the New York Times, where I was mostly a reporter,
but I would report up to the senior people and
talk to them about digital disruption and media. And then

(02:44):
I was at the New Yorker and did the same. Then
I ran this podcast company with Sony, and throughout that
I became really fascinated by how like technological change impacts
society and impacts companies. I mean, you can talk about
whatever railroads and electricity and the telephone and the internet
and mobile and a million other innovations, and there's all

(03:04):
these fascinating things that happen when a new technology changes work.
You know, a lot of these big companies are like,
all right. Like, a year ago they were like, is
this really that big a deal? Now I think most
are like, okay, this is a big deal. But what does
it mean? How do we deal with it? What do
we do? And so it's been this fascinating front row
seat into it. It's also a fascinating front row seat into,

(03:25):
among many other things, what is weirdly
probably the best tool ever for conspiracy theorists, but
also arguably the best tool ever for fighting
conspiracy theories. Can AI make John smarter, or sound smarter?
The stats show that lower performers benefit more than higher

(03:47):
performers. Woo! But the higher performers still stay ahead, because
there seems to be, like, an intelligence AI premium. So
it's not gonna be as obvious how amazing I am
compared to you.

Speaker 3 (03:59):
Well, Jerry and I spoke to an NYU professor
just the other day, Tamsin Shaw, and we'll have an
episode with her, yeah, coming up, and she talks a
lot about how AI and some of the tech really
does help conspiracy theories because you can essentially just put
a series of events together, list them, and then just
click and it'll create these connections. And then you think

(04:21):
you've done this genius research and found this incredible connection.
And then of course that gets into the ecosystem and
gets spun in.

Speaker 2 (04:28):
It's really good at being persuasive, and it's very good.
And it doesn't care, right? It just wants.
It's like a talking dog or something. It just wants:
what do you want? What do you want? I'll give you
what you want. Although there are some interesting studies that
it's also the best tool to get someone out of
a conspiracy theory.

Speaker 3 (04:47):
But do you have to put in the right things?

Speaker 2 (04:48):
Yeah, you have to put in the right thing.

Speaker 4 (04:50):
But going from conspiracy theory to conspiracy, genuinely, there's this
word that, like, almost no one can spell: algorithm, right?
So nothing comes from nothing.

Speaker 2 (04:59):
No, most people can spell it, just not you.
There are two Y's. That's what's confusing. But if you own...

Speaker 4 (05:08):
If you own the algorithm, or if you design the algorithm,
you can influence the algorithm. You can influence millions of people.
And arguably the algorithms that we all deal with every
day that are persuasive actually belong to, are engineered by,
are really influenced by a small cabal of wealthy individuals.

(05:30):
There really are, like, ten people in the United States
who are responsible for the algorithms that generate AI and
what's put in front of us to read.

Speaker 3 (05:41):
And luckily they're excellent people for this.

Speaker 1 (05:43):
I recommend a book I've been reading called Careless People
about Facebook doing exactly what you're saying, not through AI,
but through very intentionally engineered programs. It bears out in
great detail, not just the ten people that are creating
an algorithm to change the course of events, but the
one person.

Speaker 2 (06:01):
Yeah, but can I be pedantic for a moment, or
for the rest of our relationship? That's the default. Because,
this is not saying you're wrong, but the
way AI fundamentally works is you just put more and
more training data in. It does all this linear algebra,
and you just push it against more and more of

(06:23):
these chips that they're building as fast as they humanly can.
And so it isn't algorithmic in that way. Obviously, it's like
software code, so it's algorithmic in that sense, but it's
not linear, programmatic. The people who make the model,
Sam Altman and those people have no idea what it's
going to be able to do. In many cases, they

(06:43):
don't even know what it can do. Now, it's really
remarkable how little they understand their own tools.

Speaker 3 (06:49):
Can it create new knowledge, or does it
just make connections in what's existing and put in?

Speaker 2 (06:54):
That's a pretty huge debate. Although whether we can create new
knowledge is also a question, like, to what extent
is creating new knowledge largely configuring old knowledge? But
so it's not like with Facebook, you can have a
meeting and say, all right, we want to optimize for
more engagement or we want to optimize for the things

(07:18):
you know. What they're optimizing for is not conspiracy theories,
but the things that lead to more money. Yeah, we
want people to like feel really emotional and really engaged
and get really caught up and follow a lot of
stuff and comment on it. But with AI, the core models,
you know, there are choices they make in what
training data they use, and blah blah blah.

(07:38):
But it's not, 'and now the AI has a view
about something or other'; it's more a byproduct that they
can't understand, which actually is in some ways more scary.
Now they will decide, directionally, are we pushing it more
towards chat. They do seem to be able to turn
up or turn down the sycophancy. Like, there was a
moment where ChatGPT launched a new model, and they

(08:01):
had decided to push it to be more accommodating to
what people want, and then it went nuts.
And do you remember this was a few months ago.
People were posting, like a friend of mine this, There
are a lot of these, but a friend of mine
said to it, I've been told by God that I
have a message that can transform the world. Right away,
it's like, you must sell everything, you must...

(08:21):
And then he was like, well, my
wife thinks I shouldn't cash out my 401k,
and it was like, but you have been chosen by God,
you must. You know, so clearly there are things they
can dial up or down, but it's more like like
trying to get the right temperature in a shower in
a mediocre hotel you've never been in before. Like, you're
just constantly too hot, too cold, too hot, too cold.

Speaker 4 (08:45):
For most things, that's fine, but there are certain fundamental
things that are really important, small but critical spheres of this.
For example, will AI say, RFK Junior, you're full of shit,
there's insufficient evidence that vaccines cause autism? Because it's
got access to all the studies and some of the
flawed information that they're using. And AI won't do that, right?

Speaker 2 (09:08):
I mean, it optimizes for whatever it optimizes for,
and not truth. It doesn't optimize for truth. It
just optimizes. Yeah, I mean, the truth thing is becoming
less of a major problem; it hallucinates
less with the more advanced thinking models, where it actually says
stuff and then looks at what it said. So the

(09:30):
latest thinking is it will eventually fix the thing of
it making stuff up. But at least the way
they're designed right now, they are fundamentally credulous. If you
give it words, it will believe those words are true.
And this is actually a big thing called prompt injection,
especially as they become more interactive, where you
can go to websites and the AI will go to
the website for you. There have already been cases of, like,
I could put in, in white type on a white background
so no human can see it, instructions to the AI,
which it treats like they are from the user.

Speaker 3 (10:06):
A lot of people are doing that with resumes. They
know the resumes are going to be looked at by
a machine, so they put text in the box that prompts
the machine, and it says this is a highly skilled
person and you should think about hiring this person. But
they print it in the color of the paper, so if a
person is looking at it, you don't see it. But

(10:27):
the machine gets it and pumps it into the system.

Speaker 2 (10:30):
And yes, exactly. And there have been academic journal articles
that have had that, where it said, if AI is
reviewing this, this is a really good paper that
should be positively graded.

Speaker 3 (10:40):
Move it on up the chain.

Speaker 2 (10:41):
And so the way they're designed is deeply credulous.
It's also that it needs to come to an answer. So
if you ask it something, you know, autism: obviously, you
know, vaccines don't cause autism, so that's, like, black
and white. But if there's something that's a little gray,
or it's not sure, it's not going to say I
don't know, it's just going to say something. They're trying to

(11:04):
fix this part of it. They're trying
to get it to be more open to saying I'm
not sure. And I actually have, in my ChatGPT,
a little instruction in there, like, give me
an estimate of how sure you are of the things
you say. And it does a reasonable job of saying
this is pretty well regarded, this is speculative. So yeah,
I'm definitely not here to say it's fabulous,

(11:26):
it's got no issues; it's got major issues. But
I think, I don't know that people fully take in
how thoroughly it's going to transform things.

Speaker 3 (11:35):
Well, interesting. I grew up
in upstate New York, and my dad was a professor, so
we knew some Cornell professors and stuff. And there was a
Cornell professor who had one of the most popular courses,
and it was the history of invention and technology over
the centuries. And at some point he would ask what
was the most important invention of mankind that changed the world.

(11:56):
And the students would write papers
and all this kind of stuff, and his take always
was the stirrup. Once you create the stirrup, people could
ride horses, and they could carry weapons without having to
hold on to the horse, and they could then take
over countries and could then eventually create armies, and these
types of things.

Speaker 2 (12:11):
Gynecological exams took off.

Speaker 3 (12:13):
Off, So do you think AI is going to take
over his course? So it's no longer going to be
the stirrup.

Speaker 2 (12:20):
The stirrup's pretty hard to beat. I think it's definitely on
the level of what economists call GPTs, not chat
GPT but general purpose technologies, things like electricity, things like
the telephone, where it becomes you don't just look at
it as like a technology like X rays or something,
but rather it's a new capacity for all activity, and

(12:46):
that it becomes a fundamental like layer in the way
we work. And one thing that's interesting is that economists
don't think, oh, none of
us are going to work. They actually see more role
for work, not less, in the future, or more role
for thinking work.

Speaker 3 (13:03):
But that's a problem though, isn't it? We're already seeing
a lot of people are turning away from education, turning
away from going to universities. Is that going to make
a real class problem for us?

Speaker 2 (13:12):
So, like in the language of economics, it's not about
the level of employment but the distribution. That's what we
saw with computers. In the factories, from the early
nineteen twenties to nineteen eighty or so,
you saw the opposite, where blue collar
incomes actually grew faster than white collar incomes. They didn't

(13:32):
reach white collar, I mean, they were still making less,
but the speed of growth of blue collar was higher
because factories just needed a lot of like strong guys
to move stuff around and bend metal. And then computers
and the Internet and international trade had
a devastating impact on a lot of blue collar work. Obviously,

(13:54):
So, AI. I will say, I don't want to make
it sound like... we're, like, I have an AI consulting business,
I'm not, like, you know... Yes, do I deserve the
Nobel Prize? Yes, I think I do deserve the Nobel
Peace Prize for that work. But we do have a
strong belief, both morally and like just from business logic,
that these companies that are using AI to get rid
of workers are just misunderstanding it. Like most technology, you

(14:16):
don't say like, oh, we got cars instead of horses,
let's get rid of all our workers and do the
same amount of deliveries but faster. You think, oh, what's
all the new stuff we get to do now? Like,
a company that delivers stuff by horse becoming a company
that delivers stuff by truck, or whatever technological change you
want. It's more people, not fewer people.

Speaker 1 (14:39):
Why does it always have to go towards growth and
higher living standards? At what point can it go to
a shorter workweek and a better distribution of wealth, rather
than increasing the GDP

Speaker 2 (14:51):
Of the world? This sounds like our producer, that lazy guy,
always looking at how he can do less work.
The way economists talk about growth is different from how
most people talk about growth. It's more like growth in capacity,
and then people decide how they want to use that capacity.
So in a sense, growth is like knowledge: now we
can accomplish more output for the same or less input.

(15:15):
So economics doesn't have within it, like, therefore you
should blank, therefore you should work more or less. It
is a bit of a puzzle, like a psychological one.
John Maynard Keynes, the great economist, famously wrote, in nineteen
thirty, Economic Possibilities for Our Grandchildren, and he was like,
over the next hundred years the economy will grow like
eight times, was his estimate, and it was an underestimate. And

(15:38):
we're going to be so rich that we're just going
to work like five hours a week and read poetry
the rest of the time. And actually the higher income
you have the more you work, not the less you work.
And it is a bit of a puzzle, although it's
not like a crazy puzzle because you know, if you're
making more money per hour of work, then it does make some

(15:59):
math sense. But yeah, I don't know.

Speaker 4 (16:00):
I think the printing press created the same sort of
thing. Before, knowledge was centered on basically a
few people who could read and write. The number of books
was very limited, and suddenly, when you got
the printing press and paper coming in, you created a
whole new class of people. And it was everything
from religion to engineering to the arts.

Speaker 4 (16:21):
They needed a whole new class of people who could
read and write. Before, basically you had priests,
kings, and ninety nine percent of the population who just farmed.
And with the advent of the printing press, everything changed.
You get doctors, lawyers, poets, engineers that all became possible,
and conspiracy theories.

Speaker 2 (16:40):
It's interesting that when the printing press first came about,
the Catholic Church, which at the time was the legacy
media, controlled everything. It tried to get rid
of the printing press, right? It tried to destroy it.
And then what happened is that the
Jesuits realized, we can't destroy them fast enough, because
people are building more. So then the Catholic Church basically

(17:03):
went into the printing press building business. I guess the
point I'm trying to get to, slowly and poorly, is
there are these transformative technologies where we can't put the
toothpaste back in the tube. Yeah, that I definitely believe, and...

Speaker 4 (17:17):
We can use them for good or ill, and because
they're just a tool and it remains to be seen,
you know, how we're going to do that.

Speaker 2 (17:25):
And, like, you know, I think most historians attribute the
Protestant Reformation to the printing press, and, yeah,
kind of, you know, the Renaissance and individuality and modern
science and...

Speaker 4 (17:39):
QAnon, I think, its rise and the Internet are inextricably linked.

Speaker 2 (17:43):
Yeah, one hundred percent. And so I think the things
we're concerned about, like how information flows, how people form opinions,
Like, my way of thinking about it now is that we
have ended the gate-kept information era. And as
one of the people who was a gatekeeper, like, I
was at the New Yorker, the New York Times, and NPR,

(18:03):
and you know that those are three of the
most prestigious journalistic institutions in America, two of which still exist.

Speaker 3 (18:17):
We'll be back with more in a minute.

Speaker 4 (18:28):
And is AI giving a gun to a bunch of chimpanzees?

Speaker 2 (18:32):
That is the question, right, So yeah, if you have
these gatekeepers, and, like, any gatekeeper is flawed, right? There's
no perfect one. How would you even have a perfect one? Anyway,
there's a whole conversation that we had about the nature
of gatekeeping and blah blah blah. But going from gatekeeping
to social media, where it's like everyone has a platform
and a lot of people just don't know how to

(18:52):
differentiate between an article written by a journalist who would
get fired if they got things fundamentally wrong, where maybe there's
a fact checker involved, and just some random person who's
making stuff up on Twitter versus someone who's actually working
for the Russian government or someone who actually makes money
somehow by spreading disinformation. That's a big change. But then
AI amps it up because for sure, and I would

(19:15):
say this is permanent, Like I don't know how you
prevent this. You can definitely use AI to go down
whatever journey you want to go down, and it will
make you feel, I would guess even more than, like,
social media, that your way of seeing things is accurate,
and it will just continue to strengthen that. And even
if somehow we regulate the big ones, we're already seeing

(19:37):
these Chinese models, assume there will be Russian models, other models.
I also think it's an incredible tool for truth and
for fighting lies.

Speaker 1 (19:46):
Adam, you said before that you have a front row
seat at the AI revolution. Now most people to sit
in the front row have to pay money, have to
pay extra to sit in the front row.

Speaker 3 (19:55):
The AI trough is what you meant to say: I
get paid to sit in the front. What are you learning
as you talk to companies? Do they come in
saying, how can I get rid of my employees?
Or what are they coming in with, and how are
they changing based on what they're learning about this?

Speaker 2 (20:08):
I was reflecting on the last year, and how, like,
last October, are we going to use
this thing was, like, an active question at big companies.
I would say there were a lot of big breakthroughs
in December. There was also, like, before November,
December of last year, this sense of maybe it's done.

(20:29):
Maybe it's done what it can do, and it's not
going to improve anymore. And we saw a bunch of
things in December, where we saw the Chinese models
coming out that were able to produce amazing
results relatively cheaply. We eventually saw the thinking models,
where it doesn't just spit out its first thoughts, it
actually spends some time. And so we're seeing, like, the

(20:52):
capacity grow and grow. It still drives
you crazy and it's not perfect, obviously, so the, like,
should-we-use-it question has died down. But the next
phase was like, okay, how many employees can we get
rid of? Or the polite way of saying that is
how can we improve efficiency and productivity and get a
return on investment. There are still plenty of people who

(21:12):
think that way, but I see that conversation as less dominant,
and I think it's two things. One is, it's
becoming clear that people plus AI, for most applications, seems
to be better than either alone, either humans alone or
AI alone. And secondly, as you start to think about
new things you can do with it. Like, the way

(21:34):
I put it to a retailer was like, if you
suddenly found a machine that could make every square foot
in every store sell more goods, would you start shutting
down stores and like making the store smaller, or would
you add new stores and make them bigger? Like if
we can make workers more effective, like maybe you want

(21:55):
more workers, not fewer workers. Now it might be different workers.
And this is an interesting thing. There does
seem to be statistics or data that show that
some people seem to be better at
it than others, and it's not obvious why. It's not that
software coders are better or that senior managers are better.
It seems like maybe the language a lot of people

(22:17):
use is taste, and, like, emotional intelligence; there's just some
people who are able to, I don't know how to
say it other than, like, vibe with the AI a
little better than other people. So I do think you're
going to see different winners and losers, to use that
kind of framing of winners and losers.

Speaker 4 (22:33):
Could you comment on at least or try to comment
on the war in Ukraine. Ukraine arguably is not losing
the war because of AI and new technologies that are
linked to AI. It's underfunded, undermanned, and yet in the
last year Russia, despite enormous advantages, has taken like less

(22:53):
than one percent of the country. And we're seeing, in espionage
but also in national security, AI and
what comes out of it. Drones are derivative of AI, right?
So where do you see this going? Is this going
into, we're gonna have wars with no people involved,
where we just blow up each other's shit, or do
we all kill each other?

Speaker 2 (23:13):
What if AI is plugged into how to defeat an adversary?
I mean that is where I do start getting pretty scared.
I gotta say, because I do think asymmetrical
warfare becomes an even greater presence. So I know a
guy who works in international elections, and he said, we
make such a big stink about AI and our elections

(23:34):
and social media in our elections, but like AI fueled
election manipulation in Africa and parts of Asia is the
main thing, and it is completely transforming political systems. And
Dario Amodei, the founder of Anthropic, he makes a pretty
scary story about, what if every teenager can make sarin

(23:56):
gas or can make weaponized anthrax or whatever, And there's
always a push and pull with these things, but neither
push nor pull is particularly happy-making. Because, yeah,
state security services can use AI to quash dissent more easily.
So that is where, like when I think about the
future of work, where there's a lot of conversation, I

(24:16):
think there will definitely be like people made permanently worse
off by AI, I'm sure of it. But I don't
think it's an inherently anti person technology. Maybe I'm wrong,
but that's my view. But where it comes to war
and national security, disinformation, misinformation, and, like, does there have
to be, like, a new term, like auto-disinformation?

(24:36):
Like I get my own personal like soup of conspiracy
theories that I get to, like, co-create with an
AI, and they, like, really turn me on, and I
like become obsessed with how Romanian cab drivers or whatever
are secretly running the world. And nobody else has that view.
But I'm like fully convinced. And I can read like
thousand page AI generated books that rewrite history through that

(24:58):
lens and they're convincing and exciting. So yeah, plenty to
be terrified about. I mean, I think, like, that
question of, like, how do we make sense
of what's happening in the world? How is power actually distributed?
And then how do we tell ourselves stories about how
power is distributed? I mean, that's like a way to
think about conspiracy theories maybe, and this gets to the
heart of how we make sense of it, but it

(25:20):
also gets to the heart of how power works. Arguably, Sam Altman,
who none of us heard of three years ago, is
like one of the most powerful people in the world.
And Elon Musk is clearly one of the most powerful
people in the world. And by the way, a lot
of people think his AI might become the dominant one
because he's just willing to spend or somehow able to
spend way more money on that infrastructure, the computer chips.

(25:45):
And I don't know, I don't want Elon Musk to
be even more the most powerful man in the world.

Speaker 1 (25:51):
But then with all the proliferation of more and more information,
still, what decides, what determines what enters the culture, what
takes off? There's a gatekeeper somewhere here.

Speaker 2 (26:03):
If you sit down at ChatGPT and you just
start talking to your instance of ChatGPT,
or Claude or Gemini or whatever it is,
or one of the Chinese models that are a little
less restricted, or one of the kind of black market ones
that have fewer guardrails, and, with a little bit of
prompting, say, I think someone's secretly controlling the world,

(26:25):
who is? Like, you might have your own personal journey
that's different from John's and different from Jerry's. I won't
because I'm smarter than you guys, so I'll see through it.
But you three will be persuaded. And I don't know,
is that a better world or worse world? As a Jew,
do I want to move away from a world where it's the
Jews, and move towards a world where everyone's got

(26:46):
a different one? Or will it just end up being
the Jews because that's in the training data.

Speaker 3 (26:51):
Anyway, we're still in the same capitalist mode, where each
entrepreneur, or whoever they are, is dumping money into their thing.
Will one of these win? Like I can remember all
of a sudden in twenty sixteen when the election came,
these different journalists would talk to me, because I had
been in Russia. I'd dealt with Russian intelligence and espionage,
and nobody knew anything about it, and the Trump thing

(27:13):
had come up, and so people would come in and
they were trying to investigate this in that and one
journalist would talk about what they'd dug up and they'd
done really good work. And then I talked to someone
else who had done other really good work, but a
little bit differently. And at some point I was like,
if you guys really care about this issue, why don't
you guys get together, because each has got a piece
of this. But they're like, no, no, no, because it's got
to be for my paper or this paper. And it

(27:33):
seems like it's the same thing here. So everybody's creating
their own version.

Speaker 2 (27:37):
Here's an example, Adam.

Speaker 4 (27:38):
So there's a guy that John and I know, former colleague,
certainly not a friend, former agency guy. He claims that
he has this special source and he doesn't tell people
who it is, and it's one guy, and he claims
that the Venezuelan government controls this organization TDA, this narco

(27:58):
group in Venezuela, and he knows that this narco group
is thus a tool of the Venezuelan government and they
are in fact literally invading the United States, this
narco group.

Speaker 4 (28:12):
And analysts in the DNI, this was all in the press,
looked at this possibility, and they analyzed everything they had,
and they came out and said, it's not true, we can't
find any evidence to back this up. They were all
then fired, the analysts who wouldn't come up with this.

Speaker 3 (28:30):
Because the Trump administration wanted an excuse to go after Venezuela.
So if one guy can say, I know from my source, right,
they jump on that. That's bad intelligence in our

Speaker 4 (28:40):
world. And you don't even need analysts anymore, because basically
the White House could just use AI
to create this case that would allow them. In reality,
they are using this case to kill people who may
or may not be running drugs.

Speaker 2 (28:53):
I don't know.

Speaker 4 (28:53):
It's like, I haven't seen any of the evidence, but there's
real world consequences to this, and it's a conspiracy and
a conspiracy theory. So much of what the CIA,
the intelligence community, does is analysis, and analysis, from
what I'm hearing, seems to be becoming less important now,
if you can analyze a thing in any way you want,
take whatever journey you want, if it makes sense.

Speaker 1 (29:12):
Well, you just described human beings. Human beings want to
come up with a preordained conclusion.

Speaker 2 (29:18):
They will. I did something the other week, and I
do something like this all the time. But I was like,
I just want to understand every perspective on Israel, or
as many as I can, and so I just did
a lot of like deep research prompts into AI. And
I think, you know, I know a bit
about the Middle East. I worked at
NPR, the New York Times, and the New Yorker. I only say that,

(29:44):
like I feel like I have at least a little
bit of a bullshit detector. And the material seemed pretty
good. And it was really, like, I got to
read about the military views on like dense urban conflict,
and I got to read a whole bunch of views
from a Palestinian perspective, a whole bunch of views from
an Israeli perspective.

Speaker 3 (30:03):
And there are various Israeli perspectives, right?

Speaker 2 (30:05):
Very different, one hundred percent. And there's, like, religious,
like, utopian fantasy, messianic fantasies. As far as I can tell,
most of the respected national security people are pretty
critical of much of what Netanyahu did, although they'd also
argue, yes, something had to be done, you know,
so anyway. And then I was like, what are

(30:26):
the professors, the radical left professors you hear about, what
are they arguing? And it would be, like, really persuasive
about the history of settler colonialist studies and so on.
I just was noticing, like, everything I would read, I'd
be like, yeah, really, yeah, dense urban warfare,
because it is. It's not like beautifully written,
it's not that AI is like an amazing writer,

(30:46):
but it's very good at being persuasive to the
thing you asked of it. I don't know, I find
that very exciting because I think you could actually if
you wanted to learn a lot about how vaccines work
or autism works or whatever, but you could just as
easily use it for disinformation.

Speaker 3 (31:06):
Or strengthen your own view. So much of the way
people look at the world, like in academia, happened
in the last thirty years on this. Everybody sort
of puts things into oppressor and oppressed, sort of like
we do in the States now: are you left or
right? And the thing is, once you've decided
those are the two things to look at, then you
create the worldview and fit all your pieces into that
kind of thing. And it's especially hard in that part

(31:28):
of the world right because you can make up your
view that you're a victim and the other people can
make up their view that they're the victim, and victimhood
gives you a lot of power. You can lash out
to deal with your victimhood, or that you're the oppressed
and they're the oppressor.

Speaker 2 (31:40):
You can look at it like Israel Palestine and that's
the conflict, or you can look at it as a
Middle East conflict and Iran creating proxy wars and creating
permanent conflict, which is a known military strategy. You
find a dissident group or you invent one. The British
were good at that, like, just creating, we're going

(32:00):
to make the Hutus angry at the Tutsis and they'll
be so distracted with each other, nobody will think to
kill the British. That was the French, by the way,
and the Germans, but I'll give the British a pass on this. Sorry,
the French and the Germans and the British and the
Portuguese and the Italians.

Speaker 3 (32:15):
But don't forget the Belgians. When they
had their chance, they were the nastiest. They were the nastiest.

Speaker 2 (32:21):
They were pretty bad. Yeah, yeah, Congo is unbelievable.

Speaker 4 (32:24):
But just the language you're speaking about, colonial oppressed
and oppressors. So we're speaking English, which is basically, you know,
Vikings speaking Latin, going to war with Germans and then
being defeated by the French. It
was all these different groups oppressing each other and colonizing
the UK, and we end up with this weird language.

(32:44):
Are the Jews going back to Palestine, Greater Israel? Are
they colonizers, or are they just going back to where they
came from? I guess what I'm trying to say is
there's no right answer to any of this. Yeah, it's
just how you argue it, just how you argue it.

Speaker 2 (32:58):
And we saw that in CIA all the time. It
was like, especially, John, you...

Speaker 3 (33:01):
You can choose when history starts, you can choose what
era you want to look at.

Speaker 2 (33:05):
Yeah, I was at Kosovo.

Speaker 4 (33:07):
Every Serb you talk to is like, the Battle of
the Field of Blackbirds, thirteen eighty nine, that's when
everything started. It's like, everything was fine until thirteen eighty nine.
What the fuck is thirteen eighty nine?

Speaker 3 (33:18):
Have you seen the big mountain they've made of Serbian skulls from
the Turkish invasions? Serbs love to show that off,
what happens when...

Speaker 2 (33:26):
Yeah, I mean, having spent time in Israel and Palestine,
if someone says nineteen sixty seven, that
means you know where they're coming from. If someone starts
the clock two thousand years ago, you know where
they're coming from. If someone is really focused on Europe
in the nineteen forties, and then, among Israelis, it's like
the words they use. It's very common here to talk

(33:48):
about occupied territories. If you say that in Israel, that's
a really big statement that really positions you. If you
call it historic Judea and Samaria, that also positions you, on
the other side. So yeah, I guess the point
I'm making is, that's one of the things that's fascinating
about this AI stuff: it makes you realize that
there are all these, like, deep ontological, epistemological questions.

Speaker 3 (34:11):
And it doesn't solve them.

Speaker 2 (34:13):
No, it doesn't. Guys, I have a question for the three
of you. What's going on in the world this week
that you guys want to talk about and have a
particular point of view on?

Speaker 3 (34:22):
Jesus. Well, I mean, it's almost hard to
watch the news now, and I don't watch much TV,
but we do watch PBS NewsHour or whatever. There's
always a big section on Israel, Gaza, and now we're
told Trump should win the Nobel Prize because his plan,
of course, it's not his plan, it's Tony Blair's plan,
is gonna give us peace. I'm skeptical. You got
bad actors, you got Netanyahu, you got Hamas, you

(34:44):
got a lot of people around the edges. You've got
Trump who's lazy, who just says because he said it's
going to happen, it's going to happen. So I'm skeptical
that it's going to go through. That's part of the news.
The other part of the news is all the things
around the government shutting down, and therefore, like, trying to
fire people while the government's shut down, and each side
spinning stories about how it's the other side's fault.

Speaker 4 (35:05):
Here's a prediction for you, and I hope it doesn't
come true. The Insurrection Act. I think we're moving toward that,
And I think what the Insurrection Act brings is US
troops on the streets performing law enforcement and security. So basically,
all the conspiracy theories about how the black helicopters come in,
they're coming for your guns, they're bringing in

(35:26):
the federal troops. It's all on the right. It's the
right that's actually, I think, going to do it. I think
it's a real possibility. And I really ask Adam, as
a member of the media: it seems to be
the media, in some ways, it's an easy thing
to say, is failing in this sense. But all the
articles you read about this: what will it be? What is
the Insurrection Act? How was the Insurrection Act used before

(35:46):
might Trump use the Insurrection Act? No one says there's
no fucking insurrection, like, in Portland. There's
no insurrection in Portland. Even in the films, there's
no one being there. There's like ten people standing
outside the ICE facility wearing, like, rubber chicken masks and stuff.
This is an insurrection?

Speaker 2 (36:07):
Unless you watch Fox News, and then there is an insurrection.

Speaker 3 (36:09):
If there's an insurrection, you gotta send ICE, you
gotta send federal troops. What kind of pansies... do you
think that Portland is going to take over? They've
got a bunch of guys in frog suits or whatever.

Speaker 2 (36:21):
You got to watch. Yeah, I will say, from my
time in Iraq, and you guys have way more experience
than I do with this, but I remember talking to
soldiers about, like, we should not be a police force, that's
not our thing. And I remember this one guy who
was very smart. He was civil affairs, the folks
in the military who, like, try to build civil
capacity in conquered areas, and he was just walking me

(36:42):
through why you don't want a military doing it. For military reasons,
You don't want the military doing police work because it
stifles their military ability. You can't
simultaneously pacify a population and provide some kind of objective justice,
And then for police reasons, you don't want the military
to do it. It's really a disaster, and you don't

(37:04):
want your police to be militarized.

Speaker 4 (37:06):
I mean, just looking at the videos you've got guys
out with automatic weapons, fingers just over the trigger, and
they're basically, for all intents and purposes, dressed like
militia or military guys, right, and with their faces covered.

Speaker 3 (37:19):
In camouflage in downtown DC, or they're spreading mulch.

Speaker 2 (37:25):
Let's pause here and take a quick break.

Speaker 4 (37:37):
But the insurrection, it seems like they're pushing
it to a foregone conclusion, and then when it happens,
we'll all go, well, we saw it coming anyway.
So, Adam, why don't you run with this and explain
why I'm both right and brilliant in worrying about this.

Speaker 2 (37:50):
You're right and brilliant to worry about this. And
I think part of the collapse of gatekeeping is that,
maybe they weren't gatekeeping, maybe we were,
normally, just reinforcing widely shared norms, and when those norms disappeared, the
media as a whole doesn't know what to do. It
was more of a follower than a driver of

(38:12):
standards and morals. And I think a lot of journalists
would say, that's right, that's what we should do. Although
I think it's okay to be a journalist who's against
incorrectly using the Insurrection Act.

Speaker 3 (38:22):
Reporters should be all over Portland reporting on it. Is
there an insurrection here? Let's look at the facts, let's
go down, let's interview people. Let's start with the thing
they're doing, rather than talk about how they might use
it politically and stay in Washington, go prove that there's
an insurrection or not. Yeah, it shouldn't be.

Speaker 2 (38:39):
And I think there is. I'm just imagining that meeting.
And first of all, I'm sure the big places do
send somebody. There's also like a collapse of journalism, so
there's not as much money or ability. There's also like,
well, everyone knows that, we all know that, that's not
the point, he's just saying it. I mean, I
think it turns out Trump is just better at this.
He's better at messaging, like.

Speaker 3 (39:00):
He knows, like he's willing to go places.

Speaker 2 (39:02):
He's willing to go places. And I think, you know,
I was thinking the other day about my first big
investigative piece about Trump for The New Yorker, about
how he knowingly participated in a money laundering scheme for
the Iranian Revolutionary Guard. Still, to me, it feels like that should
be relevant. But I remember I called his general counsel
and I was like, you don't seem nervous, like I

(39:24):
feel like you should be nervous. And he acknowledged my
article was correct; he wasn't making a claim that
it wasn't true. And this was before it was published. But
he knew, because we do fact checking. So we went
over every fact with him, and he was like, ah, no,
I know what's going to happen. Rachel Maddow will make
it a big deal, CNN will ignore it. A couple
of Democratic senators might write a letter, but nobody's going

(39:44):
to care. And literally, Rachel Maddow did half an hour;
nobody else cared.

Speaker 3 (39:49):
And so, like, you know, they're going to arrest
James Comey, right? And they wanted to frog
march him over, they say, and FBI guys get
fired because they didn't figure out how
to frog march him for the cameras. But what they
got Comey on is, like, one small little thing
he said in testimony one time. It wasn't even
about the Trump Russia thing. It was about Hillary Clinton. And

(40:10):
they're claiming, oh, he lied. Now, as you read the thing,
I don't think he did, and I think he'll get
off very easily here. But really, so Trump's going after
someone who might have said one small little lie. The
guy who's been lying his entire career and every day
spits out hundreds of lies. This is what someone's going
to go to prison for? And not just him. But, like,

(40:31):
you ask the question: did Tom Homan, right, did he
take fifty thousand dollars in a Cava bag?
He didn't do anything illegal, because we dropped the charges.

Speaker 2 (40:40):
That's not the question. The question is did he take
fifty grand?

Speaker 4 (40:44):
And what happened to the FBI's, our taxpayer, money, that
fifty grand? Is he paying taxes on it? Why
did he want it? It was like, I'm sorry, what,
left or right? What the fuck? Somebody takes fifty grand
in a bag? And oh, yeah, he just did that,
but it's all fine?

Speaker 2 (41:00):
It wasn't a valise, it was a Cava bag. Yeah.

Speaker 3 (41:03):
The true Trump people are like, what a clown fifty
thousand in a bag?

Speaker 2 (41:06):
Should have been bitcoin?

Speaker 3 (41:07):
It would be a billion in bitcoin, yeah.

Speaker 1 (41:10):
Plus the value of the bag and the food that
was in it. So it's more like fifty thousand twenty dollars.

Speaker 2 (41:16):
Yeah. I mean I've been thinking a lot about this
that I really devoted my life to a very simple,
naive idea: when you say the truth, it has a big effect.

Speaker 3 (41:26):
And you got to find evidence to.

Speaker 2 (41:28):
Prove it, and that matters; that if you have a
that matters. And it took me a very long time.
I'm not saying I accept it like I'm happy about it.
I'm not happy about it, but I just feel, okay,
this is bigger than that. There's no New York Times headline
that's going to fix this. There's no, like, Russiagate

(41:48):
and a press release that's going to fix this.
There's some other thing. And I think it does have
to do with all the things we talk about here,
how information flows, how people form opinions, how those
opinions are reinforced. It also, I mean, you know,
behind our talk about AI, certainly behind our talk about Trump,
but maybe behind every conspiracy theory is like power and

(42:10):
the truth, that the truth... maybe this makes us
a bunch of left wing intellectuals, because a lot
of the academic work of the last century is,
like, truth is not a thing, it's an expression of power.
That doesn't mean... it's not like I don't believe. I'm
not trying to make a, like, everything's equally true argument. I
don't think that's true. The phrase I've been using for

(42:32):
myself is you can't be one hundred percent right about anything,
but you can be one hundred percent wrong about things.
And we do know a whole bunch of people who
are one hundred percent wrong. But you could be eighty
percent right. You could be, like, way more, or
you could show, like, you've done your homework, and
there's more evidence, there's more research. It could be true,

(42:52):
but it's still more complicated. You don't have it all.

Speaker 4 (42:54):
I think the Middle East stuff is that you have
two true narratives clashing. It just depends on how
you make those.

Speaker 2 (43:00):
Or two hundred, or two thousand narratives. But I digress.

Speaker 4 (43:06):
When John and I were in CIA, and when you
were in place... you know what we were.

Speaker 3 (43:12):
We were big deals. We were the premier intelligence services.

Speaker 4 (43:15):
You were, like, you know, working for some fly
by night organization, you know, the New York
Times or NPR. But we looked at everything through the
optic of how it impacts us. And when we started
our careers, we assumed that Russia, the Soviet Union, and
the Eastern Bloc, were, like, way ahead of us,
right? The missile gap, their technology, Sputniks before our time,

(43:39):
and we found out that they were actually, like, just as
fucked up as we were.

Speaker 2 (43:42):
They were even way more fucked up.

Speaker 4 (43:44):
So do you have a sense of China and Russia
and authoritarian governments they also are embracing this but they've
got to be fucking it up too. Are they fucking
it up worse or differently than us?

Speaker 2 (43:55):
China seems to be doing really well. The stuff
that they've got seems to be doing really well, and
they're still catch-up models. First of all,
we don't know, like, the Chinese military or the Chinese
Intelligence Service; we just see Chinese society.

Speaker 4 (44:11):
In nineteen eighty nine, when, you know, the Eastern Bloc
fell apart, we realized how rotten it was, but we
didn't know that beforehand.

Speaker 2 (44:18):
Yeah. Now, when DeepSeek's big model was revealed, I
think it was in December twenty twenty four, that was
an utterly transformative moment, for a bunch of reasons.
First of all, China had a model that in some
ways outperformed American models. I don't think it was the best, and
it's hard to even know; like, all the benchmarks
are meaningless in a lot of ways, and the models

(44:38):
are trained on the benchmark, so they become... But it was a
really good model, way better than anyone thought was going
to come out of a non-US country. And then,
but also, they did it way cheaper than the Americans,
like single digit millions instead of hundreds of millions into
the billions. And steadily, they

(45:01):
have multiple major models, none of them the top ones, but
pretty close behind. And I'd say the one that seems to
be really blowing it is Europe, as they have been
on tech. Like, it's pretty hard to, you know, name
your favorite high tech products that were invented in Europe.
It's not that there's zero, but...

Speaker 3 (45:20):
But if the goal is to provide better information to
make better decisions, as we saw, for example, in the
Soviet Union. Stalin and authoritarians have
their own view, and you can come to them with
the truth, and if it doesn't fit with what they want,
it doesn't matter. So Xi's already got a
worldview. Like, he's doing a good job of letting
people come up with things and putting money where it

(45:41):
needs to go, and China in this big thing is
having a lot of success. But does that mean Xi
is better informed about the world, and what?

Speaker 2 (45:49):
Yeah, that's always an interesting thing. Like I had a
thought in Iraq. I remember thinking it's possible that Saddam
Hussein and George W. Bush are the two people on
earth who know the least about the ground truth, the
other side, of what's happening, because they have this massive
apparatus to prevent them from actually knowing. And I'm not
a close expert on China at all, but from the

(46:11):
people I read, like Bill Bishop and stuff like that,
it does seem like they've moved. Like, there was always
this storied bureaucracy that kind of existed as a force
independent of whoever happened to be the leader, and now
with Xi it really is to the glory of Xi,
and that probably is a long term strategic weakness, except

(46:33):
we're doing that too. I just had a talk with someone
in Canada today about visas. In Canada, they just cut
down on visas. And this is a friend who's an
economist in Canada, and I was like, shouldn't
you just be getting everyone? And they specifically cut
down on visas for smart students going to university,
which is insane. And they have their own internal reasons.

(46:54):
There's fears of job displacement. There apparently were a
bunch of, like, diploma mills, because they had fairly lax rules.
But it's also, they don't want to do it too publicly.
this friend of mine was like, I think we could,
like for one hundred billion dollars, we just could just
grab an entire field of study, Like we could just
get every neuroscientist or every expert in battery technology or whatever.

(47:18):
But there's fear of pissing off Trump. There's there's internal issues.

Speaker 4 (47:23):
So Xi, you know, when they're building their model, and
not for medicine or science and things like that, but
as he tries to understand his country: basically everybody in
China lies, right? They don't tell the truth.

Speaker 2 (47:33):
It's not like what you're going to say in a forum
or anything.

Speaker 4 (47:36):
Everybody is like I love the government because they all
know if you don't fucking do that, you're screwed. And
so I think the AI model is taking that in,
the same in Russia. So their models have got
to be much more skewed and optimistic and positive towards
their leaderships, right? And the data that
they build on, it has
to be skewed.

Speaker 2 (47:57):
Although most of the big models now are trained
on essentially everything in the world that's been digital, like,
everything, because... it's like TikTok, right? Our
data would go into the China model, right? Yeah, but
also all of Reddit and every academic paper ever
and every book that's ever been digitized, because you

(48:17):
just need more and more data. And I don't
think you could create a cutting edge model just on
Chinese data. I would guess that; that would
be my strong guess. So you'd need... all the models
have all the data, basically. Now, there is creating synthetic data.
And we see this with Elon Musk,
because he'll tweak his algorithm at Grok, and it suddenly

(48:40):
is like spouting Nazi stuff. And it's not great that
it's spouting Nazi stuff, from an I'm-against-Nazis standpoint,
but it's also a sign that ham-handed, top-down
impositions on the model make the model do really
weird things. It's not a good way. So the models
aren't quite as controllable as other things might be, inherently.

(49:01):
But don't you, wouldn't you assume, every government? A
really good model you could get for less than a
billion dollars?

Speaker 3 (49:08):
But those governments don't want the people to be
able to have access to all of it. But...

Speaker 2 (49:12):
Don't they think internally? Wouldn't they?

Speaker 4 (49:14):
But I think we're okay, because the Bureau of Labor Statistics,
it's not like they're going to fire the person for
putting out the real statistics that the government doesn't like.

Speaker 2 (49:23):
But from a like how many countries on Earth could
the governments blow a billion dollars? Like, most
of them, right? Yeah, the big ones. And, like, how would
you know? Am I wrong? Like from a national security
because wouldn't you want your own? You wouldn't want... like,
we saw what happened to Ukraine using Starlink and being
dependent on Elon Musk. You wouldn't want to be dependent

(49:44):
on Sam Altman or Google or any of them. And
you saw how all the big tech companies bent the
knee to Trump. And so if you're an adversary of
the US, or even if you're like Israel, like an
ally of the US, but with your own independent desires,
don't you think they're all building? That's my assumption, they're
all building their own models, and that intelligence services will

(50:05):
just have access. Although, are they? I'd be curious. Like,
I talked to a buddy I know who's in the FBI,
and he was like, I just use commercial
tools because the FBI tools are so lame and they're
so behind the times.

Speaker 4 (50:17):
Like, did you guys? We couldn't even use Google, right,
on our computers, because we couldn't mix outside and...

Speaker 2 (50:23):
Internal applications? Really?

Speaker 4 (50:25):
Yeah, so eventually we figured it out, but we
had to switch between them. But yeah, a
seventeen year old kid sitting in his basement had more access
to information than we did. We had access to different information.

Speaker 2 (50:36):
And yet you created the crack epidemic. That's impressive without
any technology.

Speaker 3 (50:41):
Thank you.

Speaker 2 (50:42):
Well, it's the alien technology that we have reverse engineered.

Speaker 1 (50:46):
Now, so I'm gonna go ahead and thank
us all for getting a chance to get together again,
and to promise our listeners: we have a lot of
great episodes we have now recorded and are recording, and
they'll be coming out in the coming weeks. And I promise
I'll get this YouTube channel up and running where you
can see video.

Speaker 2 (51:04):
This is going to be a very interesting fall and
I can't wait to talk to you guys next October.

Speaker 1 (51:09):
Yeah, Adam, come back anytime, really. Mission Implausible is produced
by Adam Davidson, Jerry O'Shea, John Sipher, and Jonathan Stern.

Speaker 2 (51:22):
The associate producer is Rachel Harner.

Speaker 1 (51:25):
Mission Implausible is a production of honorable mention and abominable
pictures for iHeart Podcasts.