
January 15, 2024 50 mins

What role will AI play in the future of fake news and misinformation? What does this have to do with our brains’ internal models, with voice passwords, and with what Eagleman calls "the tall intelligence problem"? And why does he believe that these earliest days of AI are its golden age, and we are quickly heading for a balkanization? Join us for today's episode about truth, misinformation, and artificial intelligence.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
What role will AI play in the future of fake
news and misinformation? And what does this have to do
with your brain's internal models, or with voice passwords, or
with what I'm calling the tall intelligence problem? And why
do I believe that these earliest days of AI are

(00:26):
actually its golden age, and we're quickly heading for a Balkanization.
Welcome to Inner Cosmos with me David Eagleman. I'm a
neuroscientist and author at Stanford and in these episodes we
sail deeply into our three pound universe to understand why

(00:47):
and how our lives look the way they do now.
In the last two episodes, we talked about the notion
of truth versus misinformation. Two weeks ago we covered the
question about truth in the media, and last week was

(01:10):
specifically about truth on the Internet. Today's episode is about
truth versus misinformation and artificial intelligence. So if you happened
to listen to the last two episodes, you'll know I've
been arguing that the position of truth is not as
simple as many pundits have made it out to be.

(01:31):
A lot of people have said truth is declining and
misinformation is on the rise. It's achieving ascendency so much
so that the Oxford English Dictionary in twenty sixteen coined
the term post truth, implying from the structure of the
word that there used to be a time where people

(01:52):
operated on truth, whereas now, regretfully, people's beliefs are predicated
on emotion and personal beliefs. So I've made my
arguments in those episodes about why I think that position
is so specious, and what I gave was a simple
historical analysis that demonstrates beyond any doubt that people have

(02:14):
never operated on anything but emotion and personal beliefs, and
the idea that that has recently changed belies nothing but
an appalling ignorance of history. And specifically in the last episode,
I argued that we're actually in a much better position
now because of the Internet, as it prevents the one

(02:37):
thing that has historically proven itself worse than misinformation, which
is censorship, having someone else decide for you what you
are allowed to see. And if you want to know more,
please go back and listen to those episodes where I
give examples of the USSR enforcing a total clampdown

(03:02):
on the press and even on copying machines, and China
controlling which websites you are allowed to go to, and
Nicolae Ceaușescu of Romania controlling even the weather reports, and
Saddam Hussein of Iraq outlawing maps of Baghdad, and on
and on. Just imagine if President Trump or President Biden

(03:24):
had one hundred percent control over what you are or
are not allowed to see. This was the situation and
so many recent historical examples that we examined in the
last two episodes, from left wing communism in China and
the USSR, to right wing Nazism or fascism in Germany

(03:45):
or Italy, to theocratic dictatorships like we find in Iran.
In all cases the government decides what is the proper
material for you to see or not see, and in all
cases uniformly, this has been disastrous and has led to
the starvation or execution of tens of millions of people.

(04:08):
So even if you don't like hearing other people's opinions,
and you deep down believe that they're all misinformed and
you know much better, the fact is that our society
will survive for longer if we put up with those people.
Rather than imagining we'd be better off if we could
just legislate or take up arms to have them agree

(04:32):
with us. So that's where we are so far in
this series. But today I'm going to dive into the
third part because we've just arrived in this technological era
which got here with unexpected speed: the success of artificial intelligence,
and specifically of generative AI, which is a type of

(04:53):
artificial neural network that creates new content like beautiful passages
of writing, or shockingly great images or expert level music.
Its success blossomed suddenly mostly because of the increasing availability
of large data sets combined with more powerful computing hardware,

(05:15):
and as a result, generative AI now produces results that
are indistinguishable from human generated content like news articles, or
photographs or voices. And the question today is what will
AI mean for the future of disinformation or truth? Has

(05:37):
AI thrown a wrench irreversibly into this game of truth telling?
So let's start with the question of truth and AI.
If you ask the question to people on the street,
was Trump a good president? Obviously you're going to get
a range of responses. So the question of what is

(05:58):
truth here is a difficult one. Now, if you ask
ChatGPT this question, it will tell you the question
is subjective and depends on individual perspectives and priorities. And
then it'll tell you supporters of Trump often highlight X,
y Z, and then it says critics point to concerns

(06:19):
such as ABC, and it does this sort of back
and forth for most questions that you ask it, even
if you had intended to get a yes or no answer.
And a lot of people that I've talked with find
this aspect of LLMs, these large language models, really annoying.
And that's because every question you ask to Bard or

(06:41):
ChatGPT gives this sort of wishy-washy answer and
tells you, look, there's this opinion on it, there's that
opinion on it. More discussion will have to take place,
and if you say yes or no, was Trump a
good president? It will tell you, quote, as a neutral
and objective AI, I don't have personal opinions. The evaluation

(07:03):
of whether Donald Trump was a good president is subjective
and depends on individual perspectives and priorities. And then it
ends by noting that it is essential to consider a
range of perspectives and weigh various aspects of his presidency
to form an informed opinion. And you know what? It is
annoying that it does that. And you know what else?

(07:24):
it's right. This is exactly what we should do, given
that public opinion on Trump's presidency is polarized and assessments
vary widely based on political ideology and personal values. So
what Bard or ChatGPT or other LLMs do is
actually quite genius, which is that they don't get caught

(07:45):
in the trap of giving a yes or no answer
to what we think is a yes or no question. Instead,
they are synthesizing the opinions of millions and millions of
people who have written things down, and therefore it says, well,
some people think this, and some people think that, and
although some people find that annoying, it actually represents a

(08:07):
beautiful sort of fairness. For example, you can ask
ChatGPT to give me different points of view about abortion, and
it does a beautiful job with that. It gives you
the perspective from pro choice advocates about reproductive rights and
women having the right to make decisions about their own bodies,

(08:28):
and also health and safety concerns, making sure that women
have access to approved medicine to reduce risk of illegal methods,
and issues of autonomy. And then it also gives the
point of view of pro life advocates, their belief in
the inherent right to life for the unborn fetus and
alternatives instead of abortion, like adoption or parenting. And finally,

(08:51):
it points to their ethical beliefs about the sanctity of
human life from conception, and on top of that, it
spells out some other perspectives as well as compromise positions
like trimester based regulation and exception clauses when a mother's
health is at risk. Now, why does ChatGPT do

(09:13):
such a good job at this sort of thing? Because
it's acting like a meta human. It's like a parent
watching squabbling children and understanding the perspective where each different
child is coming from. Similarly, I asked it to tell
me the different points of view about the Israeli Palestinian conflict,

(09:33):
and it succinctly and fairly captured the perspective of both sides. Again,
this is because it has access to all the writing
of the world, so it sees the different perspectives. It's
not making a choice, that's not its job, and maybe
it couldn't do it technologically anyway. So instead, what it
represents is a fair and balanced view of the world

(09:56):
that we're in now. What's fascinating is that we are
probably at a moment with AI that is perhaps the
fairest and most balanced that we will ever see. Why.
It's because I predict that people will start training up
their own AIs, and they will do so with what

(10:19):
they consider the best information. So one organization might say,
we're not going to use all the books, We're just
going to use the classics, and anything that is gay
or trans or non binary, we're not including that because
we don't feel like it's necessary to get this next
generation of intelligence system trained up. We're going to train

(10:40):
this with what we believe in. And on the other side,
you'll have people on the left wing who say, I
don't want to include Mark Twain in here, or Roald
Dahl or certain books by Dr. Seuss or various Shakespeare
plays or things like that, because they represent a point
of view that people used to have, but we don't
believe in that anymore, so we want to leave that out.

(11:01):
Where you can imagine another group that feels it's better
if there's no violence in the training data. So we're
only going to use uplifting books where people help each other,
and there's no such thing as violence, and that will
galvanize a different group to say, look, you're being naive.
The world is full of psychopaths who will commit violence
against you, and you need to be prepared for that,

(11:22):
so you can recognize when a violent psychopath is seeking
power over you or seeking political power in your neighborhood
or on a national stage. So we're going to leave
out the uplifting rom com books and we're going to
just include the books about the real world and what
actually happens in warfare and how one needs to be
prepared for it, and on and on and on. So

(11:46):
I predict we are headed rapidly for a balkanization of
large language models. Just in case you don't know, Balkanization
is a word that refers to the process of breaking
something up into small, isolated factions. This originated from the
Balkan Peninsula, where a bunch of ethnic and political groups

(12:07):
sought independence and created smaller, less cooperative states. So I'm
suggesting we're heading towards a disintegration into smaller, less cooperative
large language models. The idea is you can go ask
a question to your left wing model or your right
wing model, your model that lives in Wokeistan or MAGAstan.

(12:32):
And I think that's a shame because it's taking something
that should be smarter and better than humans, but we're
going to manipulate it to be the kind of human
that you are, so that you can say, yes, I
like its answers. That makes sense now, the model tells
the truth. It's like the Biblical creation story where God

(12:55):
creates man in his own image, which actually seems to
me like a real shame, because what if God could
have created something that was better than himself. That's what
we have the chance to do with AI. But I
think it's just around the corner that we humans say
I know what truth is, and I'm going to make

(13:15):
sure that I fashion this powerful new creature so that
it looks just like me. I'm going to take the
vast space of possibility and squeeze it down until it
is nothing but a successful clone of me. So my
hope is that what will remain or grow up into

(13:37):
this space is something like a Wikipedia of large language models,
something that takes the space of human opinion and says, look,
some people think this, some people think that, and some
people think another thing altogether. After all, even though we
have countless news sites and opinion sites of every different
political bias, the existence of Wikipedia makes me feel more

(14:00):
confident that this is a possibility that we continue to
have AIs just like we have today that are unafraid
to say thank you for your yes or no question.
But as a metahuman, I'm going to give you a
more complex answer than perhaps you were looking for, because
you may or may not know that truth is a

(14:22):
tricky concept, and we don't have to pretend that most
questions are binary. Okay, So I wanted to express my

(14:46):
fear about AI breaking up into individualized models that better
match the truth of the programmer, and my hope that
by expressing this clearly we can prevent it. Now, I'd
like to shift into act two of this episode and
address the issues that a lot of people are worried
about when they think about the role of AI in

(15:08):
the era of disinformation. Now, my suspicion is there are
trivial cases which people are worried about and they probably
don't need to be, and then there are the more
sophisticated cases. So I was at a party the other
night and ended up talking with an older gentleman who
told me he was worried about AI and fake news.

(15:30):
So I asked him why and he said, well, just
think about the speed of AI. It could manufacture reams
of fake news in a second. So I asked him, well,
what would you generate and he said, well, just imagine
generating fake stories about Joe Biden. So I asked him
what would you do after you generated that story on

(15:51):
your desktop? Would you post it to your feed on X?
And he said yes. So I asked him whether he
felt sure that would make a difference. Imagine he posts
on X that Biden just adopted an alien baby. What
would it matter? Why would anybody listen to this guy's tweet?
If I make up a story about seeing Bigfoot on

(16:14):
Stanford campus, it doesn't matter if I have used AI
to write it or not. And that's because there's no
further corroboration beyond my claim on social media, so there's
no reason for anyone to believe it. But maybe, he said,
maybe it's different if I were to generate a more
carefully crafted story, like that The New York Times has

(16:36):
just found evidence that Biden cheated on his taxes.
And I agree. But I pointed out that even though prompt
engineering is fun, so you can get just the right story,
he could currently, without AI, make up any story he wants,
and it might take him five minutes to write instead
of sixty seconds of crafting the right prompt. But other

(16:59):
than saving those few minutes, it's not as though the
AI has done something fundamentally different than what he could
do anyway. And of course, whether penned by him or
the AI, he still has to try to convince people
that the story is real, even though it's just a
tweet and perhaps it links to his blog. But if

(17:20):
no traditional news source or aggregator has picked up on
it or posted something like it, it's not news. It's
going to have a difficult time getting off the starting blocks.
Now this is not to say that fake news can't spread,
because it can, but it is to say that it's
not clear to me how AI is an important player

(17:42):
in this, except that it saves him four minutes. So
I think there's some confusion generally about the role that
AI will play in fake news, and I want to
be very clear today about where the important difference with
AI may or may not be. For example, AI could
play a different kind of role if what I'm doing

(18:03):
is starting hundreds of AI bot Twitter accounts that all
chat back and forth, and at some point they all
retweet this guy's made up story, and it gains traction
and believability when somebody sees the number of likes and
reposts and comments on it. But fundamentally, I think this

(18:23):
is a cat and mouse game, and the important thing
will be for social media companies to stay ahead of
the game in terms of verifying who is a real human.
Now let's turn to the next issue with AI, which
is its ability to flawlessly impersonate someone else's voice by

(18:45):
capturing the cadence and prosody. Now, a few episodes ago,
I talked about the potential benefits of this. For example,
I have made a voice AI of my father who
passed away a few years ago, and it's so wonderful
and comforting for me to hear his voice. But the
concern that people have when we're talking about the notion

(19:05):
of truth is somebody trying to fool you with a voice.
So here, for example, is a famous person you might
know: you're a bad, bad boy. Now, that was Snoop Dogg.
But as you could probably guess, that wasn't actually Snoop Dogg,
but an AI generated voice. Now, we all share concerns

(19:26):
about this capacity to reproduce someone's voice, and I'm going
to get into some of those concerns in a moment,
but first I want to play his voice once more.
You a bad bad boy. Now that's pretty convincing, right,
maybe even more so than the AI file. But that
wasn't Snoop Dogg either. That was the performer Keegan-Michael Key,
doing an impersonation of Snoop Dogg. And the thing to

(19:49):
note is that impersonators or mimics have been around for
as long as recorded history. They do awesome voice fakes
by picking up on the cadence and prosody of someone
else's voice. So I just want to note that this
issue is not new. Now. Part of the concern that

(20:10):
people do have about AI voice generation is that it
doesn't have to be about a famous person, but instead
it might be an impersonation of your grandmother or your child,
because incredibly, these models can get trained up on just
a few seconds of voice data. So, for example, one
of the meaningful concerns is that you might receive a

(20:33):
phone call and you say, hello? Hello? I can't hear you.
Is there anybody there? And then you hang up, and
that's enough audio data for someone to make a pretty
flawless impression of your voice. So the idea is that
the sheer speed of capturing a non celebrity voice makes
this worrisome. And it's worrisome for a few reasons. One

(20:55):
of them is that several businesses moved in the last
few years to voice fingerprinting as their password. So, for example,
you call the company to find out about your bank
account or your stock trades, and you simply say, my
voice is my password, and through a sophisticated process, it
recognizes that it is, indeed you trying to find out

(21:17):
information about your account. AI voice generation renders that security
approach meaningless and even worse. The nightmare scenario is that
somebody records audio of your child's voice and then you
get a call one day, maybe you're out of the
state or out of the country, and your phone rings

(21:37):
and you hear your child say, help me, I've been kidnapped.
Please send the money now, or the man is going
to hurt me. Now. Even if you know about this
potential future scam, you're still going to hesitate in that situation.
You're going to be thrown for a loop. You're going to panic.
And the more likely case is that for non listeners

(21:58):
of this podcast anyway, you've never even heard of such
a scam in your life, and AI voice generation is
new to you, and you fall for the scam entirely.
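As a side note on the voice-password point from a moment ago: speaker-verification systems typically reduce a caller's audio to an embedding vector and accept the caller if it lands close enough to the enrolled voiceprint. Here is a minimal sketch of that comparison, assuming the embeddings have already been extracted by some model; the function names and the threshold are illustrative assumptions, not any vendor's actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # similarity of two voiceprint vectors; 1.0 means identical direction
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_caller(call_embedding: np.ndarray,
                  enrolled_embedding: np.ndarray,
                  threshold: float = 0.75) -> bool:
    # accept the caller if their voiceprint is close enough to the one on file;
    # a high-quality AI clone of the enrolled voice can also land above this
    # threshold, which is why "my voice is my password" stops being a safe factor
    return cosine_similarity(call_embedding, enrolled_embedding) >= threshold
```
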
So again, when we talk about the influence of AI
on the truth, we might be finding ourselves in many
scenarios that we could have never seen coming. And of

(22:20):
course voice is just the beginning. When people think about
deep fakes, they're often thinking about photographs. So with Midjourney
or DALL-E 3 or any other good image generation AI,
you can specify the parameters of your prompt to make
it so the generated image looks indistinguishable from a real photograph. Now,

(22:42):
a new science paper came out on this recently. The
researchers used a set of actual photographs of faces and
a set of AI generated photographs of faces, and they
asked the participants to judge or any given photograph whether
that was a real face or a synthetic face, and
the participants performed at chance. Their guesses were like throwing

(23:04):
darts at a dartboard. They were totally random. In fact,
they rated the AI faces as more trustworthy than the
real ones, and another study found that the AI faces
got ranked as more real than the photos of the
actual faces. So here's the thing I want to mention.
One of the studies, which came out in October, claimed

(23:25):
that although participants could not distinguish real and generated faces,
their unconscious brains could. So specifically, this research group ran
an experiment in which the participants wore EEG on their heads.
That's electroencephalography, and that measures the electrical activity
in their brains. Now, the researchers showed real faces or

(23:49):
synthetic faces, and the claim in the paper is that
at one hundred and seventy milliseconds, there was a small
difference in the EEG signal between these two conditions. And
so the claim of the paper is that while you
can't tell the difference between the real and the generated faces,
your brain can. There's a distinction between what we know

(24:11):
consciously and what our brains have access to unconsciously. And
so in the wake of that paper, people are suggesting
that maybe we will have neurally based safeguards in the
future to mitigate these dangers, because your neural networks will
be able to tell the difference between real and synthetic photographs.
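For anyone curious what "a small difference in the EEG signal at one hundred and seventy milliseconds" means in practice, here is a minimal sketch of that kind of comparison, assuming you already have epoched EEG data time-locked to image onset; the array shapes, sampling rate, and the simple t-test are illustrative assumptions, not the paper's actual analysis pipeline.

```python
import numpy as np
from scipy import stats

def amplitude_at(epochs: np.ndarray, t_ms: float, fs: float) -> np.ndarray:
    # epochs: (n_trials, n_timepoints), time-locked to stimulus onset at t = 0
    idx = int(round(t_ms * fs / 1000.0))
    return epochs[:, idx]

def compare_conditions(epochs_real: np.ndarray,
                       epochs_synth: np.ndarray,
                       fs: float, t_ms: float = 170.0):
    # a small but reliable amplitude difference here is what the claim
    # "your brain can tell the difference" amounts to operationally
    real = amplitude_at(epochs_real, t_ms, fs)
    synth = amplitude_at(epochs_synth, t_ms, fs)
    return stats.ttest_ind(real, synth)
```
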

(24:33):
I have to say I'm a little skeptical. It strikes
me there might be some alternative explanations to the study here.
One thing is that the synthetic faces all tended to
be more average in their measurements and their configuration, whereas
real faces can be more distinct, and this presumably explains
why in that first study people found the AI faces

(24:56):
as more trustworthy and they judged them to be more real.
But more importantly, even if the study results are true
that the participants' brains were able to make some little distinction,
it won't be true for long because these genAI models
get better and better each month, and even if there

(25:16):
are clues that can somehow be detected now unconsciously, there
likely will not be for very long. And of course
audio files and photographs those are just the beginning. Already,
there are video deep fakes that get better each month.
Recently I saw a video of Greta Thunberg, the young

(25:37):
environmental activist, and she was saying hello, my name is Greta,
and welcome to my oil company. I love how it
is pumped out of the ground and it shows her
working on an oil rig and so on. Obviously, this
was a fake video generated from other videos of her,
and the AI moves her mouth appropriately to the words,
and it does it perfectly. Now, these kinds of deep fake

(26:00):
videos are becoming so easy to make that we will
have to regularly deal with these into the future, and
just like the photographs, we may soon have an increasingly
difficult time distinguishing real from synthetic. Now, when it comes
to the question of deep fakes and misinformation, the problems

(26:22):
are real. First of all, from a psychology point of view.
You might watch a deep fake video, let's say, of
some celebrity saying something violent or racist, and then the
truth emerges that the video was a deep fake, so
the celebrity is forgiven, but there's just a little patina

(26:42):
of negative feeling that sticks with you. And you see
this happen all the time. By the way, just look
at any case where somebody is erroneously convicted of a
crime and then later definitively acquitted. The suspicion remains on
them like a cloud for the rest of their life,
totally unfairly. And my nephew Jordan recently suggested to me

(27:03):
that this is the problem with deep fakes, maybe they
won't fly in a court of law, but they can still
leverage emotional consequences on social media. Even when the truth
comes out and a person is verified to be totally
innocent of whatever was claimed. There's still something that sticks
with you. Now, because it's been my habit in these

(27:25):
last few episodes to point out historical precedent to all this,
I want to note that this is the same game
that defamation has always been. People can just drop sticky,
unverifiable statements like I don't think he's loyal to the party,
or I heard a rumor that she's having an affair,
or I think that old woman might practice witchcraft. So

(27:49):
this idea of planting seeds of suspicion is as old
as the hills. But it is the case that seeing
something with your own eyes can have a stronger effect
than mere whispers in the rumor mill, and that's why
deep fake videos are something to genuinely worry about. Now,

(28:10):
there's the second point about why the existence of deep
fake videos is something to worry about, and this one
is slightly more surprising, which is that the discussion of
deep fakes is showing up more and more in courts
of law. But it might not be for the reason
that you think. It's not that people are making deep
fakes of other people committing a crime and then trying

(28:32):
to convict them that way. As far as I can tell,
it's exactly the opposite. People are committing crimes and getting
captured on video and then they simply claim that the
video is a deep fake, and then the prosecution has
to spend a lot of money and effort and time
to try to convince the jury that the video is
actually real. And this is related to another issue, which

(28:56):
is that we want so strongly to believe things that
are consistent with our internal model. So whatever evidence comes
out in the news, people have almost infinite wiggle room
to say they simply don't believe it. When pictures and
audio and video surface that are not consistent with someone's

(29:16):
political point of view, they can just do what the
accused person in the court does and say, I don't
believe it. It is all fake. So all that seems
a little depressing, but the fact is that the battle
between truth and misinformation is always a cat and mouse game,
and there is promising news coming from the technology sector.

(29:39):
For example, several big companies like Adobe and Microsoft and
Intel and others. They've all come together to form a
coalition called C2PA. If you're curious, that stands
for Coalition for Content Provenance and Authenticity. So what
C2PA sets out to do is to reduce misinformation

(30:00):
by providing context and history for digital media. And the
idea is simply to say, precisely where a piece of
digital media comes from, what is its provenance. The provenance
is the history of that piece of digital content. When
was it created, by whom was it modified, what changes

(30:21):
or updates have been made over time. So the protocol
they've developed binds the provenance of any image to its metadata,
and this gives a tamper evident record that always goes
along with it. So let's say you take a photo
of your dog outside and say your cell phone has
C2PA on it, which presumably all phones will in

(30:44):
the near future. So now when you snap the shot,
all the data about location, time, et cetera. That's all recorded,
and that's bound to the actual image cryptographically, meaning you
can't untangle that for the life of this photo. Its
origin is known. Now you post it on social media
and anyone can click on the little icon that says

(31:06):
content credentials, they can see all the provenance information and
that's how they decide their trust in the image. This
doesn't make the photo tamper proof, but instead tamper aware,
because let's say someone comes along and manipulates the photo.
They add a flying saucer in the background, and they

(31:27):
make the claim that this is photographic evidence of UAPs.
Now that change to the photo is recorded in a
new layer. It's part of the record of the photo
and its history, and it's locked to the photo forever.
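To make the "locked to the photo forever" idea concrete, here is a minimal sketch of a tamper-evident provenance log: each entry records the state of the image after an action and is hashed together with the previous entry, so any undisclosed change breaks the chain. This only captures the flavor of the approach; the real C2PA specification uses signed manifests and certificates, and all names here are hypothetical.

```python
import hashlib
import json
import time

def _digest(entry: dict, prev_hash: str) -> str:
    # hash the entry together with the previous entry's hash to chain them
    return hashlib.sha256((json.dumps(entry, sort_keys=True) + prev_hash).encode()).hexdigest()

def append_provenance(log: list, image_bytes: bytes, action: str, tool: str) -> list:
    # record what was done, by which tool, and the image state after the action
    entry = {
        "action": action,
        "tool": tool,
        "time": time.time(),
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    prev = log[-1]["hash"] if log else ""
    entry["hash"] = _digest(entry, prev)
    log.append(entry)
    return log

def verify(log: list, current_image_bytes: bytes) -> bool:
    # the chain must be internally consistent and end at the image we now hold
    prev = ""
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if e["hash"] != _digest(body, prev):
            return False
        prev = e["hash"]
    return bool(log) and log[-1]["image_sha256"] == hashlib.sha256(current_image_bytes).hexdigest()
```
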
So when someone clicks on the content credentials now they
can see that change and any other changes, and they

(31:48):
can use that information to decide how much they want
to trust that image. Every bit of the photo's journey
is recorded, so every photo becomes like an NFT. It's
not using the blockchain, but the idea is the same
of making an indelible record. And of course the coalition
has thought through possible scenarios. So if I use an

(32:09):
old program to manipulate that photo, and my program doesn't
have a C2PA specification in it, the metadata can
nonetheless detect that there was tampering and it will show
up in the change log as an unknown change, but
a change nonetheless. And even if somebody figures out a
way to strip the provenance information from the photo, it

(32:31):
can be rematched using cryptographic techniques. And if you generate
an image with generative AI, that information will be baked
into the metadata. So the C2PA coalition is pushing
the US Senate to legislate that this technology will be
built into all media in the near future. So this

(32:53):
is pretty cool because for society this represents a one
step back, two steps forward situation. The one step back
was that AI photos and videos can be deep faked,
which suddenly renders everything questionable. But the two steps forward
is that by taking advantage of tools that we have

(33:15):
like digital certificates and controlled capture technology and cryptography, we
now might be able to build something better than anything
anyone's ever had in history. I mean, imagine that you
are living in the Soviet Union eighty years ago, and
I show you this picture of Stalin standing in Red

(33:37):
Square and there's no Leon Trotsky by his side, and
you can't quite remember if the original photo had Trotsky
there or not. And one of your neighbors tells you
he thinks Trotsky was there, and your other neighbor says
she's certain this is the way the photo has always been.
There is no disinterested metahuman arbiter of the truth. You

(33:59):
have no way of knowing whether the photo you're seeing
was airbrushed or not. But now it can be clear
to anyone by clicking on the content Credentials icon what
precisely happened here. So, at least for the moment, legislation
like this seems to make truthiness better than it ever was. Now,

(34:22):
it won't be perfect, because people who want to fake
something will always find a way. So what I think
this means is that we're not entering an era of
post truth, nor are we entering an era of truth.
This is just the next move in the chess game
between people who document and people who fake. Now there's

(34:44):
another issue about AI and truth that I want to
focus on, and I think this one is the most serious.
A few episodes ago, I talked about AI relationships and
the possibility, which is already flickering to life, that a
lot of people will find appeal in having an AI
friend or an AI girlfriend, And by the way, this

(35:06):
might even turn out to help people learn better habits
of relationships like patience and equanimity. They're learning from the
avatar that they're in a relationship with which can pay
off in their real human relationships. So it's going to
be a fascinating future for us to keep an eye on.
But there's also a dark side here, which is that

(35:28):
these AI friends could be built to convince us of
a particular point of view. Now that might sound like
a dystopian sci fi story, but the fact is, we
Homo sapiens are fundamentally social creatures. We have managed to
build our societies and cities and civilizations precisely because we

(35:51):
are so social, and our highly social brains lead us
to form our truths through discussions with other people, with
our pals and our parents and our girlfriends or boyfriends.
This is how we do our information foraging and our
sense making. And the question is whether AI could now,

(36:14):
under the worst circumstances, provide a way to hack that,
to tap into this ancient neural lock and key mechanism
that we have, and to undermine our species that way.
So the emergence of AI relationships capable of persuading humans
towards specific points of view presents a really complex ethical dilemma.

(36:38):
Our social brains are adept at information gathering through interpersonal relationships: familial, romantic,
collegial. These serve as the foundation for the construction of our worldviews.
And I want to point out that this kind of
manipulation could occur really subtly because AI algorithms could in theory,

(37:03):
analyze and mimic your behavior to effectively sway your opinion
without you really having any insight into that. So this
thought experiment where AI could hack into our neural mechanisms
of social influence, that raises not only technological and ethical questions,
but also questions about the vulnerabilities of our social fabric

(37:27):
Are we as a species equipped to defend against these
kind of manipulations or could the discovery of this way
of exploiting our own vulnerabilities be the beginning of the
end for us. So, as we integrate AI into various
aspects of our lives, we can't go into this blindly.

(37:49):
We have to be simulating possibilities and constructing the ethical
frameworks and the possible regulations that would make sure that
this new technology doesn't undermine our most vulnerable security flaw,
which is our very social neural architecture that forms its

(38:09):
truths by talking to others. Now, I want to transition

(38:30):
into the third act of this episode, and for this
I'm going to take a totally different angle on the
question of whether AI will take us off the road
from the truth. What if the closest we will ever
get to the truth is via AI? After all, the
problem is that as humans, we are biased by the

(38:52):
very thin trajectories that we take through space and time,
and we are shaped by our neighborhoods and our cultures
and our religions. But AI rides above all of that.
It is the metahuman that can weigh a whole planet
full of opinions and options. It becomes the oracle, which,

(39:15):
as a reminder, in ancient Greek mythology, was a person,
usually a woman, who could communicate with the gods and
provide divine guidance. The most famous oracles were at Delphi and
Dodona and Olympia. These were consulted by anyone who was
seeking advice on important decisions like should I marry this

(39:36):
person or should my nation go to war? And in
the Greek mythology, the oracle would go into a trance
like state and communicate with the gods. Now, in these stories,
oracles played an important role in ancient Greek society because
they were a source of wisdom and guidance, and their

(39:57):
advice got sought out by everyone from paupers to kings.
And I'm going to make an argument that we now
have the technology to build a real oracle and access
it for pennies, but it's not going to take root
in modern society, not because of any fault in the technology,

(40:17):
but because of two facets of human nature. So for
the first facet, I'll mention that I was talking to
a colleague recently who was making the argument to me
that if Bernie Sanders says something about the energy industry,
then he's immediately shot down as a socialist. So my
colleague's idea was to get AI to say the same

(40:40):
thing about energy and then everyone would take it to
be true, or at least take it in a different way.
And I thought his idea was really interesting, but I'm
not so sure I think that this will work for
very long. And this is for the following reason. As
I mentioned earlier, I predict we're heading toward a balkanization
of AI, where different groups will train the AI on

(41:04):
data that they want and purposely throw out the data
that they are quite certain is bad: anything from the
other side of the political spectrum. And so the idea
that we can get AI to tell us the oracular
truth in a way that everyone will listen to is
flawed because I think that soon there will be no

(41:26):
AI oracle that weighs all the evidence equally. And the
fact is, when it comes to something like what is
the right way for us to produce energy, there is
no single right answer. First this is because there are
always what economists call externalities. So if you say let's
go entirely solar panels, there will be other things that

(41:48):
you haven't thought of, like that the solar panels require
a particular element like molybdenum, and that is rare and
it has to be mined. And if the whole world's
energy consumption goes to these new solar panels, that would
cause major wars around molybdenum, the way that people fight
now about diamonds or water. And there would be another

(42:09):
problem with choosing one technology, let's say solar panels, which
is what happens when there are cloudy days or in
a rarer circumstance when a volcano goes off, as happens
every few millennia and actually blocks out the Sun for
a while, then we would regret having all of our
technology made solar. So really what we want is a

(42:30):
mixture of technologies: solar and wind and wave and geothermal
and nuclear and hydrogen and probably fossil fuels also because
you never know the future, and the best example we
have of survival is the way that Mother Nature allows
life to survive by making a huge menagerie of different
life forms and seeing which ones survive. There's never a

(42:54):
single answer about the perfect species. At different times in history,
different ones survive, and in the future there will always
be unexpected events which cause some life forms to survive
and not others. So it's a pretty good guess that
that's what we would want for our energy portfolio as well,
to have a very diversified portfolio. So let's imagine that

(43:17):
we have an AI oracle and it tells us
that a big diversified portfolio is the optimal answer. The
thing I want to concentrate on is that this answer
won't please any particular political player who has incentives, and
it certainly won't please anyone who is in business who

(43:39):
has skin in the game to produce solar panels or
fossil fuel or nuclear or whatever, and it won't please
the environmental activist who believes with certainty that she knows
the right answer. So it may well turn out lots
of the time that the optimal solution is not one
that any particular human or party of humans

(44:01):
is going to want to hear, even though it is
actually the best thing for society in general. And this
is where we think we want the oracle, but we
don't when it gets in the way of what we
want to believe. And hence we return to my prediction
of balkanization. Different parties will be incentivized to modify the

(44:24):
oracle by restricting its diet of books and articles, feeding
it some and not others. So I mentioned earlier the
political reasons for Balkanization, and here I'm also pointing to
the economic reasons. Fundamentally, people and businesses are self interested,
and if the oracle doesn't think your answer is that

(44:46):
important to the overall picture, you might want to modify that.
And there's a second reason why AI oracles might not
take root, even if they should, and it's not because
of the Balkanization, but because people will eventually just
fake it. What do I mean? Well, right now, AI

(45:08):
has an explainability problem, which means that the networks, with
their almost two trillion parameters, are so complex that there's
no way to say, oh, we know precisely why the
system gave that answer and not a different answer. This
is part of what has led to the magical quality
of AI, but I suggest that it's also going to

(45:30):
lead to another problem. People will simply fake it. In
other words, I tell you, hey, I ran this two
trillion parameter model, and I trained it on all the
data of all the energy sources and supply chains and capacities,
and it told me its position as an oracle, that
we should follow the plan that I suggest. Maybe this

(45:53):
is the plan that benefits my family's business, or the
plan that leans in my political direction. It would be
very difficult for you to prove that the AI did
not tell me that, especially if I show you the
output screen where it says that. You're presumably not going
to try to reproduce feeding the entire corpus of energy

(46:14):
economics data into the AI, which I assure you took
me seventeen months and a team of geniuses to accomplish,
and so the final analysis of the system has to
be taken at my word, and society is no dummy.
So my prediction is that this will have a chance
of working the first time, and then that strategy will

(46:35):
collapse as soon as more people take a flyer and say, hey,
I can't believe it, but this oracle told me that
my plan is the optimal one. There's going to develop
a need for provability somehow that an unbiased AI system
actually yielded that conclusion. But I don't think we're smart
enough yet to see what that can look like and

(46:57):
how to build it. And so between balkanization just feeding
the AI what I want to and manipulation claiming that
my oracle came to just this right opinion, I'm afraid
that a possible outcome will be people not trusting the
output of any AI oracle. They'll trust it just as

(47:20):
little as they would any particular politician saying we should
legislate for this solution, or any business person saying we
should just put our chips on my solution. The oracle
will degrade to becoming a sock puppet. Now, this is
obviously the worst case scenario for where things can go,
because the idea of an AI oracle is extraordinarily appealing.

(47:44):
I'm just concerned this won't happen, and I'm coining this
the tall AI syndrome. Now, you've probably heard of the
tall poppy syndrome, which is that in a field, if
one poppy grows tall, it's going to get mowed down
because it stands out from the rest of the poppies.
So this expression arose about the tall poppy syndrome, which

(48:06):
is that if some person is just better than everyone
around her at something, let's say swimming or math or whatever,
then people will tend to criticize her and try to
mow her down. So, in thinking about the future of
AI and truth, it strikes me as a possibility that
we are going to run into a modern version

(48:29):
of the tall poppy syndrome, the tall AI syndrome. In
other words, even if AI could be an oracle, the
kind of truth teller that we've dreamt of since ancient Greece,
there are many reasons we will not listen. So in
this episode, I covered the future of truth from the

(48:50):
point of view of our latest technological invention AI, and
what I think we can see is that there's good
and bad to look forward to here, making this just
like every other technology we've ever developed. And as we
wrap up this three part series about truth, from news
stories to the Internet to AI, I hope you'll join

(49:13):
me in trying to get a clear picture of where
we are and where we have been and where we're
going so that we can all try to do what
we can socially and technologically and legislatively to build a
more truthy world. Go to Eagleman dot com slash podcast

(49:37):
for more information and to find further reading. Send me
an email at podcast at eagleman dot com with questions
or discussion, and I'll be making episodes in which I
address those until next time. I'm David Eagleman, and this
is Inner Cosmos.