Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:11):
Welcome to the IdeaGen Global Future Summit here, live at the Ned. I'm honored to be here with a good friend and a leader on so many issues that we're about to talk about, Dr. David Bray. Welcome, sir.
Glad to be here with you, George.
You've often been brought in, I'll say quietly, Dr. Bray, to
(00:38):
help global leaders in times of high-stakes crises. These include scenarios involving blackmail and disinformation.
How can organizations and governments build resilience
(01:00):
when synthetic media, generative AI and, as we heard, agentic AI, etc., are being weaponized against them?
Speaker 2 (01:12):
So I think it's first important to take a view of the forest, and then we'll get to the trees. The forest perspective is this: over the last 20 to 25 years, we have succeeded in rolling out technologies, including the internet, smartphones, commercial apps, but also technologies in space, and now AI and what's possible with generative AI, such that, basically,
(01:36):
if you have a smartphone, and I would submit that there are now 2 billion people out of 8.2 billion people on the planet who have a smartphone, you have a mini version of the CIA or the KGB circa the early 1980s in your pocket. So we've got 2 billion people on the planet who are capable of doing what the CIA could. And what I mean by that is: with your smartphone, you can call anybody at a moment's notice around the world, hopefully with permission.
(01:58):
You can track assets using AirTags or other means, and if you download the right apps, you can actually get commercial satellite imagery as recent as 15 minutes ago at 0.25-meter resolution. I guarantee you that President Reagan, President Bush 41, President Clinton and President Bush 43 would have loved to have had your smartphone as part of the Situation Room in the Executive Office of the President. Incredible, and that's just the last 25 years.
(02:19):
So when companies are trying to think about how they navigate this, there are obviously huge opportunities, because the next one billion people on the planet who are going to get this technology are going to get it for less than a hundred bucks, and that creates opportunities. But it also creates risk, because now, with generative AI, we have succeeded in passing some aspects of the Turing test, which means we can create realistic text, realistic audio, realistic video that looks authentic and is
(02:42):
completely synthetic.
It takes only about 43 seconds of an audio clip for me to create something that makes it sound like you said something when in fact you didn't. And that's going to be particularly hard for free societies, because in free societies we don't police what people think. I don't want to become a dictatorship where either a government or a company is arbitrating what is truth, and the reality is, oftentimes, as things are emerging, getting to
(03:04):
what actually is truth is hard.
On top of that, there's a reason that when you go to court, they say you want to tell the truth, the whole truth and nothing but the truth, because those are three different things, and sometimes, whether it's an advertisement or rhetoric or things like that, individuals might do two of the three. It's not exactly a lie, but I didn't tell you everything I had to give up to get the win that I got, for whatever, or about that energy drink.
(03:24):
Yeah, it's going to be great, but is it really going to make you climb a mountain? So this is really interesting, because we are in an unprecedented, unplanned experiment in which we have super-empowered people with massive capabilities and, at the same time, free societies are kind of unprepared. And again, I don't want to become an autocracy where I say this is the version of truth.
(03:44):
If you don't like it, you're fired, imprisoned and/or killed. So what do companies do? I think we're now at the point where we need to see the rise of services that will say: if I have taken the time to confirm that this video, this audio clip, really is me or my brand or my company, then I have asserted that. Everything else is kind of
(04:06):
like caveat emptor: buyer beware. We almost seem to be at caveat internet on the internet, which is: anything you see on the internet, until you've taken the time to triangulate it, might actually be untrue. And it's still worth knowing that, even though we have super-empowered people with the capabilities of the CIA and KGB, the agency still hires 15,000-plus analysts who take three to four weeks to research an issue, to write it up for the President's
(04:26):
Daily Brief, and even then they give confidence intervals, and sometimes they're wrong.
How many of us, when we go online, whether it's to search or to talk to an AI, have three to four weeks and 15,000 analysts to wait for a result before we reach a conclusion? And so I think, for businesses, you're going to have to have empathy for how this is impacting your customers and your clients, and you're actually going to have to have a
(04:47):
candid board conversation which says: when an attack of questionable information quality hits us, what are we going to do?
It's not a question of if, it's when that happens. Because whether it's entities that want to short your stock and hit you with that, or North Koreans who want to impersonate job applicants because they want to get access to your
(05:08):
networks, which we know is happening, these things are happening on a daily basis, and it's going to be particularly hard for free societies. But it's coming, so plan. And again, it's really almost like we have to say: we will help our customers or our clients have better ways of triangulating what is better-quality information.
We're not going to tell them what to say, because that's dictatorial, and we are going to find ways to assert certain
(05:29):
things as being from us, legitimately; everything else, beware. Are you
Speaker 1 (05:32):
saying having sort of a badge of some sort, or some way to say: okay, this is a Microsoft video, or any piece of content, and you see that certification badge and it's theirs and you can verify it just by looking at it? And anything that doesn't have that perhaps may not be, so just don't trust it.
Speaker 2 (05:53):
And I don't want it from a major platform, because a major platform in the United States has Section 230 concerns, which is: the moment they start arbitrating that, they lose those protections. But yes, it could be a coalition, it could be a company, it could be an alliance that actually comes out and says: if we put this stamp on this, we are assuring that it really came from us.
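The "stamp" described here can be sketched as a signed digest of the content. This is a minimal illustration, not the design Speaker 2 has in mind: real provenance schemes (C2PA-style manifests, for example) use public-key signatures so anyone can verify without holding a secret, whereas an HMAC keeps this sketch stdlib-only. All names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical publisher-held key; a real scheme would publish a
# public verification key instead of sharing a secret.
PUBLISHER_KEY = b"example-brand-signing-key"

def stamp(content: bytes) -> str:
    """Return a provenance stamp asserting 'this really came from us'."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(PUBLISHER_KEY, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check the stamp; anything that fails it is 'assume it's bunk'."""
    return hmac.compare_digest(stamp(content), tag)

video = b"official product announcement"
tag = stamp(video)                      # asserted by the brand
assert verify(video, tag)               # matches: treat as legitimate
assert not verify(b"doctored clip", tag)  # tampered: don't trust it
```

The key point the sketch captures is asymmetry of trust: content either carries a verifiable assertion from its claimed origin, or it does not.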
Speaker 1 (06:11):
Do you see that also
extending to individuals?
Yes, 100%.
Speaker 2 (06:14):
In fact, I would say it has to extend to individuals. If you only allow companies and governments to assert it, then individuals lose their voice. What is a problem right now is the ratio of bot traffic to human traffic: there have been more bots than humans online from 2013 onwards. There's a university here in town in DC
(06:35):
that tracks bot activity on social media. Per their analysis, on a major social media platform, for more than half the world's countries, the top 300 most-followed accounts are all bot-driven, not even real.
Yeah. So we go to social media thinking it's social. Now again, some of that might just be an artist, musician or politician who has decided to automate their account, but that's exactly what a bad actor would do too: make it look
(06:57):
legitimate and then occasionally slip those things in there.
Speaker 1 (07:00):
I want to digress for a second, because it's something I look at. You know, we're in Washington, DC today, and I saw a piece that I tell a lot of folks about, which was made with Veo 3. I don't know if you saw the fake auto show piece,
(07:20):
and I showed it to a colleague, a little bit older than I am. After he watched it, he said, what? I said, well, none of that was real. You can see the potential of something like that causing
(07:41):
disruption in the community, but maybe a heck of a lot more globally. How do you determine that in real time? I'm assuming there are folks looking at things like this who, on a moment's notice, are saying, okay, let's take this down. But what if at some point there's so much of it that you can't?
Speaker 2 (08:05):
So, full disclosure: I just came back, a little bit jet-lagged, from a NATO exercise in Copenhagen, in which part of the exercise was a real-time attack against one or more NATO countries. The first 12 to 16 hours were actually purely cognitive, in which, instead of the narrative being
(08:26):
'we are under attack from the adversary of NATO,' the adversary is actually saying: no, no, no, you've got it wrong. This is actually something the US is leading, and they're using NATO as cover for the fact that they actually want to take us over. There are videos, videos of world leaders meeting in collusion on this, and we have put these videos out there. The public needs to be aware. But this is not what you think.
(08:47):
This is actually: you know, we are the victims here. And that's the first 16 hours. Then, after that, if you hadn't hardened the communications: communications went down, electronic warfare happened, cyber attacks happened. And in the midst of all that, part of the scenario was that there were these underground groups starting to carry out cyber attacks, because they said they wanted to create awareness that the world was not aware of the true narrative going on.
(09:09):
That was just a small example. So yes, it's very real that this could happen. Again, the way that we, as free societies, need to adapt to this: we can't censor things. If we start censoring and taking things down and saying this is the only narrative, then we are on a slippery slope to autocracy in thought and autocracy in reality.
What we have to do is just encourage healthy skepticism. Whatever you see, until you've triangulated it from multiple
(09:31):
sources, assume it's bunk.
Speaker 1 (09:34):
We see that on a smaller scale in Washington: this happened, and then you realize the media tries to stay ahead of whatever the issue is. Right. A little bit different, but you could see that with disinformation campaigns, and because we're free and some other nations are not, we're sort of at a disadvantage on some level. 100%.
Speaker 2 (09:53):
And actually, I want to give a shout-out to Ellen McCarthy. She is a retired naval intelligence officer and was head of INR two administrations ago, and one of the things she has been advancing, and I've been supporting, is this idea that we need to give people agency and tools to determine information quality for themselves. And again, none of us is going to say you can't have tabloids. If you want tabloids, that's great.
(10:13):
And the reality is, oftentimes when a crisis is happening, drips and drabs of information are coming in, and we may later find out that wasn't the full story. It wasn't the full narrative. Sure. The sad thing is, about 15 years ago, when a tragic school shooting happened, within the first day there were about four false narratives, conspiracy theories, that came out.
Now, a lot of those might actually just be people who are
(10:35):
trying to get a sense of control over a sad, tragic event with a conspiracy theory. Now, sadly, when those same events happen, within the first hour there are more than 50 conspiracy theories online, and a lot of those are propagated, and things like that.
And so, with anything, I'm just trying to tell people: unplug, take a breath. You know, the reality is, we won't know yet.
Speaker 1 (10:55):
This is unfolding and everything like that. And this is maybe, as I heard long ago here in Washington: I was in a meeting, and somebody expressed a concern about something taking place in a foreign country or whatever. The person said: don't worry, do what you can do within your own sphere of influence. Yes. And trust that there are others, perhaps at a different pay grade, in a different place, who are tasked with this.
(11:16):
We have a limited ability. If I get a notice on my phone that says there's an attack... I remember 9/11. Right. I was right in the city, not far from here actually, and the radio was on, and the misinformation, just in real time, was: there's a bombing at the State Department, there's all of
(11:40):
these things, and you know. Now we know sort of what happened, but at the time it was complete chaos.
Speaker 2 (11:47):
If I can give you three examples of how quickly the world has changed. In a past life, I joined what was called the bioterrorism preparedness and response program. It was all of 30 people, and we existed, this was November of 2000, because the US had been paying certain countries to disarm their nuclear weapons, and we discovered: oh, they have also been weaponizing anthrax
(12:08):
and smallpox and things like that. So we existed for that reason. If you remember, in February of 2001, the Agile Development Manifesto came out, so doing agile development versus waterfall. I was in charge of the technology response, so I adopted agile development, and I was getting massive pushback. They were saying: follow the five-year enterprise architecture, follow the three-year budgeting cycle. How come you're not spelling out your requirements up front?
(12:29):
You know, you have to spell out all your requirements. And I literally sent an email in June of 2001 saying: we do not have a deal with bad actors or Mother Nature not to strike until we have our IT systems online. So it was scheduled, weeks in advance, for me to give a briefing to the CIA and the FBI as to what we would do technology-wise should a bad day happen. That briefing just happened to be scheduled for nine o'clock on September 11th, 2001. 8:34: the world changes.
(12:50):
We physically carry servers, because that's information sharing at the time: you're carrying the hardware. Set up an underground bunker. Fly people to New York and DC. Don't sleep for three weeks. Stand down from hell. On October 1st, I end up briefing the CIA's interagency committee on terrorism. On October 3rd, the first case of anthrax shows up in Florida, and on October 4th it's followed by the threat letters on Capitol Hill and things like that. Had we not done agile development, we would have had
(13:11):
to handle 3 million environmental samples and 300,000 clinical samples by fax as opposed to electronically. Even more importantly, there were plenty of conspiracy theories that the US had done this to itself. Everywhere across the country was supposedly a target. It's interesting: when anthrax happened, the entire country wanted to get tested, everywhere from South Dakota to California. The number one place per capita was actually Hollywood.
(13:33):
Draw from that what you will, but that was 2001. Now let's fast-forward to 2009. In 2009, I'm on the ground in Afghanistan. I get to grow a beard, I get to go outside the wire, not in uniform, not active service. And while I was there, there was unfortunately an event in a district of western Afghanistan in which the Taliban had determined
(13:53):
that the local governor was on the take, was taking a bribe, and they showed up before the governor took his bribe and said: you either pay us or we'll kill you. So the local Afghans do the logical thing: they pay the Taliban. The governor finds out, he gets upset, he calls in NATO and US forces, saying the Taliban are in the area. US fighter jets fly overhead. They see there are innocent civilians on the ground, and they
(14:15):
fly away.
What happened was, the Taliban had taken a photo of the fighter jet flying overhead. Then they set off a propane tank, time-stamped both of those photos, and went on social media saying: US airstrike kills innocent Afghans. So of course the Department of Defense says: we're investigating. That's true, but of course the media cycle is spinning out of control and everything like that.
It took about three and a half to four weeks before the Department
(14:38):
of Defense finally figured out what happened, but by that time the news bubble had moved on, and the US ambassador had actually apologized for what had happened, even though, at the end of the day, it was committed by the Taliban. They had just tried to construct a scenario. They had actually used our own OODA loop, our own cycle of decision-making, against us as a way to win favor and support. That was 2009. And remember, smartphones had just come out about a year and a
(14:59):
half earlier.
So fast-forward now to 2017. I had parachuted into a role to help out the FCC. They'd had nine CIOs in eight years, and I arrived in 2013. I couldn't say it at the time, but they also had two advanced persistent threats from nation-state actors in our IT systems. Part of the goal was to move everything to a better place. We moved everything to public cloud and private hosting, one, to
(15:20):
save taxpayer money, but also because we just couldn't trust the IT systems. There was a high-profile public proceeding here in the United States. For those not familiar with what public proceedings are: it's a chance to raise novel legal issues that the agency must answer before it makes a decision. It's not a vote, it's not an opinion poll; that's not what it is at all. It's a chance to raise novel legal issues, and most
(15:40):
government agencies receive fewer than 10,000 comments over 120 days. We were seeing 7,000 to 8,000 comments a minute at 4 a.m., 5 a.m., 6 a.m. US Eastern Time. Now, we had already gone and asked the General Counsel: could we block bots? And they said no, because if someone can't see and can't hear
(16:01):
, they may not be able to file a comment, and that's a violation of the Administrative Procedure Act of 1946. So: no CAPTCHA for you.
Can I use invisible means? No, that looks like surveillance; you can't use invisible means to detect bots. Can I at least block the same internet protocol address, an IP address filing 100 comments a minute? No, because one of those hundred comments might be real. So we ended up spinning up 3,000 times our capacity.
(16:21):
Fortunately, we had moved to the cloud, so we could do that. We were up 99.4% of the time. But the chairman's office came in and said: is this a denial of service? I said: not at the network layer. Nothing's been compromised, the database is fine, we're getting the comments. But effectively, given the rules of engagement, at the application layer, yes, it's blocking actual humans from leaving comments.
Well, I didn't know it at the time, but immediately certain parts of Congress said: well, where's
(16:43):
your evidence? I said: patterns of life. 7,000 to 8,000 comments a minute, when most government agencies get fewer than 10,000 over 120 days. They said: that's not forensics.
Speaker 2 (16:53):
I said I didn't think I needed forensics. They said: why didn't you report it to law enforcement? I was like: no law got broken. This is just me drinking from the fire hose, because that's what I've got to do.
Well, it took about four years, but at the end of four years the New York Attorney General concluded that, of the 23 million comments we got, at least 18 million were politically manufactured: nine million from one side of the aisle, nine
(17:15):
million from the other side of the aisle. So at least they were balanced. But that was the state of the art in 2017.
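The "patterns of life" argument in this story is simple rate-anomaly math. A minimal sketch, using the figures quoted in the exchange (a baseline of under 10,000 comments over 120 days versus thousands per minute observed); the threshold factor is an illustrative choice, not anything the FCC used.

```python
# Baseline from the anecdote: most proceedings draw fewer than
# 10,000 comments over a 120-day window.
TYPICAL_TOTAL = 10_000
WINDOW_DAYS = 120

def expected_per_minute(total: int = TYPICAL_TOTAL,
                        days: int = WINDOW_DAYS) -> float:
    """Historical average comment rate, per minute."""
    return total / (days * 24 * 60)

def is_anomalous(observed_per_minute: float, factor: float = 100.0) -> bool:
    """Flag rates wildly above the baseline (factor is illustrative)."""
    return observed_per_minute > factor * expected_per_minute()

assert not is_anomalous(0.05)   # an ordinary proceeding
assert is_anomalous(7_000)      # 7,000 comments a minute at 4 a.m.
```

The point is that a behavioral baseline can make manipulation obvious long before packet-level forensics exist, which is exactly the tension in the "that's not forensics" exchange.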
So, getting back to your question about AI: this is where I tell people, recognize that AI has continued to make advances. The ability to create synthetic content that looks very realistic is only going to continue to accelerate. The only way we counter it is healthy skepticism and asserting, with some degree of confidence:
(17:37):
This really came from me. Everything else, assume it's bunk until proven otherwise. Because by 2030, some estimates say, about 40% of the data on the planet will have been synthetically produced.
Speaker 1 (17:48):
Or it's fake, of questionable quality. I mean, there is value in it.
Speaker 2 (17:52):
You can use synthetic data, for example, if I want to share information but not give my specifics. Right. So I can say, for example, that I'm over the age of 21, but I never gave you my date of birth.
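The over-21 example is a derived attribute: you share the claim, not the birthdate. A minimal sketch; a real deployment would also cryptographically sign the claim (for example, in a verifiable-credential scheme) so the recipient can trust where it came from. All names are illustrative.

```python
from datetime import date

def over_21_claim(date_of_birth: date, today: date) -> dict:
    """Derive a shareable claim without disclosing the birthdate."""
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    # Only the derived boolean leaves this function; the DOB does not.
    return {"over_21": age >= 21}

claim = over_21_claim(date(1990, 6, 15), date(2025, 1, 1))
assert claim == {"over_21": True}
assert "date_of_birth" not in claim   # the specific never travels
```

This is the benign face of "manufactured" data: the claim is synthetic in the sense that it was computed, yet it is truthful about the underlying fact.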
But it is manufactured, and so that does raise interesting questions, especially for free societies. How do you make any decision when you're not even sure of the data you're basing your decision on? This is where I tell people: in some respects, we all have to become mini versions of the
(18:15):
intelligence agencies, which means healthy skepticism: triangulate from multiple sources, know the lineage and pedigree of where something came from, know your sources and the degree to which you trust them. It's almost like we all have to be mini versions of the CIA.
Speaker 1 (18:26):
Yeah. And so OpenAI, ChatGPT, Perplexity, you know, all these other platforms, Grok. If you're using ChatGPT, and this is an aside question, but, yeah sure, I think a burning one, and you're asking it a question about whatever it may be, who won the Super Bowl over the past 30 years, and it doesn't tell you where the content is coming
(18:50):
from. Right. What is the usefulness of the tool if you're not assured, if there's no stamp that says this is where the content came from? Is that the mini version of the CIA that has to overlay on top of that?
Speaker 2 (19:05):
You know, I think what will end up happening is, because most of us don't have the time and bandwidth for the things that really matter, we may actually subscribe to a private service that actually does the vetting for the things that are important.
Speaker 1 (19:14):
So there will be another service on top of OpenAI.
Speaker 2 (19:17):
Or maybe OpenAI will provide it. Here's what I would say. Generative AI does lots of great things. Generative AI actually dates back to the late 1980s, early 1990s. The only reason we can do generative AI now is because of two things. One, the compute power is finally here; we didn't have it in the 1980s and 1990s. And two, the other reason we couldn't do it then is that we didn't have the data. The reason we have the data is that we've all used the
(19:39):
internet for the last 30 years. We produced that data, which means the good, the bad and the questionable. So generative AI is, in some respects, just throwing a whole lot of data at the wall, hoping to find patterns that have coherence. But when you ask that prompt, if the data doesn't actually exist there, it may give you something that looks very realistic but is completely synthetic, and that's
(20:00):
where you see, for example, lawyers who have asked questions of generative AI and it cites court cases that have no existence whatsoever. Or you ask for a scientific reference, and it looks like it came from Nature, but that article doesn't exist. And so that's where you have to say: trust, but verify.
The other thing I would also say, though, is that there have been multiple flavors of AI, and we shouldn't assume that generative AI is the only one out there. So I'm a big
(20:22):
proponent of what's called active inference. What is active inference? Probably the best way to explain it is: if you have a three- or four-year-old and you give them an object, they'll drop it on the floor. You give them another object, they drop it on the floor. Generative AI would take about a million attempts before it learns that when I let go of an object, it falls. I guarantee you that a three- or four-year-old, after drop attempt number five or six, is like: yep, if I let something go, it
(20:43):
falls.
And the helium balloon, of course we know, rises instead of falling. Generative AI would go: what? Whereas active inference would say: I don't know why, but there is now a new class of object in the world that, instead of falling, rises. And so I would submit that the future of AI that is much more positive for free societies is not a singular,
(21:06):
central, monolithic AI platform-as-a-service. It's a million, if not a billion, locally optimizing AIs that might be helping with your calendar, helping with my calendar, might be trying to figure out the status of shipping in the Suez Canal relative to the pricing of metals, and things like that. But they're all locally optimizing. And the nice thing about active inference is that it's actually trying to minimize surprise, and if it sees something that's
(21:28):
novel, it says: okay, I'm now going to commit the energy to deal with that. And that actually has a useful side effect: it's vetting, it's probabilistic, and it's really just saying, I will now commit the resources to do something different. Because the reality is, our brains use only between 15 and 20 watts for everything we do, while we're talking about generative AI needing nuclear power plants. And again, generative AI can do certain things well, but I would submit that, for a more energy-efficient future, active inference would be a much better way.
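The dropped-object story can be made concrete with a toy "minimize surprise" learner. This is an illustrative sketch only, not a full free-energy or active-inference implementation: the agent keeps simple outcome counts and commits an "update" only when an observation's surprise (its negative log probability) crosses a threshold, so the helium balloon triggers work while the sixth dropped object does not.

```python
import math
from collections import Counter

class SurpriseMinimizer:
    """Toy sketch of minimizing surprise (hypothetical, simplified)."""

    def __init__(self, threshold: float = 1.5):
        self.counts = Counter({"falls": 1})  # prior: released objects fall
        self.threshold = threshold
        self.updates = 0                     # energy committed to novelty

    def surprise(self, outcome: str) -> float:
        total = sum(self.counts.values())
        seen = self.counts[outcome]
        p = seen / total if seen else 1.0 / (total + 1)  # smoothed estimate
        return -math.log(p)

    def observe(self, outcome: str) -> None:
        if self.surprise(outcome) > self.threshold:
            self.updates += 1                # novel: commit resources
        self.counts[outcome] += 1            # update the world model

agent = SurpriseMinimizer()
for _ in range(5):
    agent.observe("falls")   # dropped objects: expected, no extra work
agent.observe("rises")       # helium balloon: surprising, one update
assert agent.updates == 1
```

The design point mirrors the energy argument: unsurprising observations cost almost nothing, and effort is spent only when the model is wrong about the world.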
The other thing is also: if you're in a field, whether it's national security or healthcare, where I can guarantee you that your present and future are not embodied in the past training sets, I want to use that.
So again, there are multiple flavors. Whenever I meet with people, there are two things I find just really fascinating right now. One, when we talk about AI, we don't talk about which flavor,
(22:10):
because there's computer vision, there are expert systems; we really need to talk about which flavor.
But then, two, when you see attempts at policymaking around the world, somehow we want one AI policy to rule them all. And I'm like: you know, when we did IT, we had these things called advanced data processing systems back in the '70s. We didn't think that our policies for IT and health were going to be the same thing for IT and banking, were
(22:31):
going to be the same thing for IT and defense.
So did we forget that? Or did someone mislead us into thinking that there was going to be one AI policy to rule them all? I think we just go back to existing laws and ask: where does AI break the model? And again, which flavor of AI is breaking the model? And then upgrade them.
Because here's the nice thing about active inference as well. And again, I've been pitching this for two years, and I
(22:51):
recognize that VCs right now are bullish on GenAI: you can actually bound what active inference even considers before it even computes it. And what do I mean by that?
You can say, for example: in my house, I don't want the following things ever to occur. Or, if I'm in a car, I don't want my car to plow into a building. Or, if I'm on a plane, generally, planes should not plow into
(23:11):
buildings as well.
So I can bound by space.
I can also bound by time. For example, I don't want to get notifications between these hours and those hours, because I'm off the clock. I can also bound by policy. And so, if the best way to predict the future is to create it, I really think we need to encourage more businesses, more investors and even countries to say: if you want a version of AI that's more conducive to free
(23:33):
societies, let's get to it.
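Bounding by space, time and policy can be sketched as a guard that filters actions before the agent computes anything at all. All zone names, hours and action labels below are hypothetical illustrations, not any particular product's API.

```python
from datetime import time

DISALLOWED_ZONES = {"inside_building"}   # bound by space
QUIET_HOURS = (time(22, 0), time(7, 0))  # bound by time (off the clock)
FORBIDDEN_ACTIONS = {"record_audio"}     # bound by policy

def within_quiet_hours(t: time) -> bool:
    start, end = QUIET_HOURS
    return t >= start or t < end         # window wraps past midnight

def permitted(action: str, zone: str, t: time) -> bool:
    """Decide whether the agent may even consider this action."""
    if zone in DISALLOWED_ZONES:
        return False                     # space bound
    if action == "notify" and within_quiet_hours(t):
        return False                     # time bound
    if action in FORBIDDEN_ACTIONS:
        return False                     # policy bound
    return True

assert permitted("notify", "kitchen", time(12, 0))
assert not permitted("notify", "kitchen", time(23, 30))   # off the clock
assert not permitted("steer", "inside_building", time(12, 0))
assert not permitted("record_audio", "kitchen", time(12, 0))
```

The design choice this illustrates is pre-computation filtering: the constraints shrink the action space before any optimization happens, rather than auditing outputs afterward.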
Speaker 1 (23:35):
Active inference. Incredible. Now, you've said that the future's already here. Yes. I think you just proved it with everything you just said. But you assert that it's unevenly distributed, right? Gibson?
Speaker 2 (23:50):
Gibson gets credit for the original quote, but yes. What do you see as the most urgent blind spots for both the public and the private sectors? The number one, most important thing is that we have unintentionally, through multiple technologies that have been rolled out over the last two decades, created a sense of anxiety and a loss of
(24:14):
agency on the part of people.
A lot of people look at this moment in time and say: I'm sensing polarization, I'm sensing anxiety. I'm like: yes, you are. Now, we've seen this before. Imagine if I told you about a time in US history with massive technological progress and the rise of companies, while, at the same time, newspapers were selling sensationalist headlines that may or may not have matched the actual story. We may have gone to war with Spain over a disinformation event.
(24:35):
Congress was actually slightly more polarized in the 1890s. That was the 1890s, and we got through it. The way we got through it was we went back to the local level and reminded ourselves that the United States and most free societies work best with decentralized operations, not centralized operations.
So in this moment, I can remember, I was tackling some things around 2009-2010, and
(24:57):
I was asked, as one is: you have five days to tell us what the future of work looks like in 10 to 15 years. You have five days. So I came back and I said: there are going to be multiple technological revolutions happening in parallel, including
These are going to create adisplacement of the types of
jobs.
It's not that there's not goingto be newer jobs and even
(25:18):
possibly more jobs created than displaced, but people are going to be displaced. Already, we could see in 2009-2010 that the social contract of: you go to school once, high school or college, you have the same job for life, you never have to change jobs, was already getting frayed.
And so I said: by 2020, we're all probably going to be changing jobs every three to four years, and we might be
(25:39):
working multiple jobs at the same time, and so that's going to create a whole lot of stress and anxiety and, in particular, you're going to see more displacement of the types of jobs in the heartland relative to the coasts. So I said: start doing tax incentives to bring jobs back to the heartland. I also pointed out, at the time,
I said: have a college competition, because there are NIST codes for the type of education required for a job;
(26:00):
publish those codes and invite high school students, college students, anybody, to write an app or a website that says: I'm currently X, I want to retrain to be Y, and how do I get there? What online courses, what local community college offerings? Give people agency back. And all you have to do is invite the winner to Congress or the White House and shake their hand.
It didn't get done. But I raise that because I think where we are now, in 2025, is that
(26:24):
people have been feeling that anxiety about the change in their lives and their livelihoods for too long, and the brain doesn't like to be in a state of anxiety. That anxiety is now channeled into anger. Anger is now embodied in grievances. Look at the Edelman Trust Barometer for 2025, which is global: 61% of respondents around the world say they have a moderate to high sense of grievance against one or
(26:45):
multiple groups. Two: 40% say it's legitimate to commit an act of violence, whether it's doxing, swatting, disinformation or a physical attack, as a result.
And so when I talk to companies, when I talk to countries, I say: you can give agency back to your customers. If you can give choice and agency back to your clients or your citizens, they'll love you, and that's what we need to do
(27:05):
to get through the next five years. And I think the biggest blind spot is that people have missed the fact that this has been a long time coming and, at the same time, that we've been here before. I hope we can pull out of the dive before we hit the mountain.
Speaker 1 (27:21):
I've heard a couple of things that are really exciting. One is: I see incredible opportunities for the development and creation of agency platforms, whatever that looks like.
Yes. I also heard you say something that maybe takes away the anxiety for a lot of folks who are watching this, which is that there may be more jobs created, net.
(27:44):
Oh yeah, there could be more.
I mean, you look at everything that happened.
Speaker 2 (27:47):
It's really just job
displacement.
Now we may work fewer hours, but that's okay, because, you know,
the reality is, I tell people, 2,000, 3,000 years ago we all had
to be vigilant about our village being attacked.
Now only some of us have to; we've delegated that to a few.
Back in the 1800s, when we had the industrial age, people were
working 12-hour jobs, six days a week, in factory conditions, and
(28:09):
some people still do, and that's their choice, but
they don't work as long hours.
So it may be the future is both more jobs and working fewer
hours.
It does raise the question, then: who are we as people,
when we think about how so much of our identity, particularly in the
United States, is tied up with our profession, when, in fact, I
would submit, we need to be more than just our vocation?
We actually need to be what we do as our avocation, what we do
(28:29):
in our communities too.
So interesting. So you think that's where we're heading.
Speaker 1 (28:33):
Perhaps? I would think
more about what you do, how you
help, how you impact, versus: I'm this or that, an accountant or
a lawyer. I mean, I look at what I do.
Speaker 2 (28:42):
I mean, I'm gainfully
employed, but a lot of what I do
I do simply because it matters, and I think it matters for the
future.
I think that's the case. If people had that luxury, that they
knew that their needs were taken care of in a paying job
that they were already doing, and I'm not a big fan of UBI for
these conversations, but anyway, if they actually had a paying job
they were doing, I think a lot of people, whether it would be
caring for the elderly, whether it would be caring for their
(29:04):
children, whether it would be teaching or things like that,
would also find meaning in avocations in addition to
vocations.
But we as a society, partly because of our roots, have put
so much definition on who we are in our job.
That adds to the anxiety at the moment as we go through this
change.
Speaker 1 (29:18):
How much is life
going to change over the next
three to five years for the average individual, especially
here in the United States?
Speaker 2 (29:24):
I want to give you a
note of optimism, but I would
say we're going to see more change in the next five years
than we saw in the last 20 years.
Speaker 1 (29:32):
Wow.
Speaker 2 (29:32):
And that's why what
we do now matters.
I mean, it really is day by day.
We can influence a better future if we are focused on it,
but it really has to be a cross-section.
Speaker 1 (29:41):
And is there enough
of a focus on that?
Speaker 2 (29:44):
We're distracted.
Honestly, right now we are distracted, and that's partly, I
mean, that's always been the case, if you look at US history.
Unfortunately, we are distracted until it becomes
clear and present and obvious.
Speaker 1 (29:54):
And then we'll react.
Speaker 2 (29:55):
Yes, and that's by
design, because we didn't want a
king; we don't want a king-like individual.
But it does mean we are always late to the party.
We were late for World War I, we were late for World War II,
we were late for other things.
But that's where you want to have these conversations
beforehand, those relationships.
I mean, so much of my life I have brought analyses to decision
makers and laid them out.
Speaker 1 (30:13):
Even in 2009 in
Afghanistan.
Speaker 2 (30:14):
I was like, why are we
still here?
It's not a country, it's 13 different tribes.
Now, not to oversimplify the issue, I said, you know, we could do
two things.
We could either, A, go to the 13 tribes and
offer them aid on an annual basis, promising, per the
Pashtun Code, that they will not harm us, the West, and they'd
get it.
That's the Pashtun Code.
Or, if you don't want to invite that or do that, invite the
(30:35):
United Nations to play a peacekeeping role, with possibly
India and/or China making up part of the forces. And,
unfortunately, no two-star or three-star ever got promoted
saying "we're leaving," and we know how that played out.
So part of being a positive change agent is you bring data,
you bring reason, you bring logic, but you also have to be
willing to say: okay, at this time they're not ready, but I'm
(30:56):
going to have this ready for when you finally say it's time.
And so much of life is knowing when you want to fight
that battle and when you want to say: okay, I'm just putting the
marker down.
Speaker 1 (31:06):
Give me a call if you
need me.
It's all timing.
Yes, it really is all timing.
Now, you've survived a disinformation attack yourself.
Yes. Dr Bray,
what lessons did you learn personally from that experience,
and how can future leaders prepare themselves, both mentally
and operationally, for these types of assaults?
Speaker 2 (31:29):
Yep, it's going to
happen.
If you're out there, if you're willing to be out there, someone
will take advantage of what you're saying. And that's why I
remind people: when you go to court, they say tell the truth,
the whole truth, nothing but the truth, because the way they'll
attack you is they'll take one thing but not give the whole
picture.
And so you have to resist the urge in the moment to fight back,
because they've already planned the narrative and
(31:51):
everything like that.
You just have to go: time will ride this out, truth
will come to light eventually.
And have the fortitude to say: I know I did the right things, but,
even more importantly, I know I helped the team do the right
things, because in my case, I was being a flak jacket for the
team. And while it felt, you know, again, when you're living a life
(32:15):
of service, to have your service questioned can be
sometimes the most personal attack, and I think that was part
of the plan.
But you just soldier on.
And the way I look at it is, if that's the only thing I have to
give for my country, that's a small thing.
But I did say, and I recommend to leaders, if you've not seen it,
there's actually a really good video on YouTube.
There's multiple videos, but it's Marcus Aurelius'
(32:36):
Meditations, condensed into 30 minutes. Wow.
And so stoicism is your friend, because it helps you step back
and say: while it feels very personal at the moment, I can't
control whatever they're doing.
I can't control whatever game they're playing.
They are using asymmetry against me.
What I can do is control my response. And my response,
(32:56):
instead of being motivated by anger and frustration, can
simply be: you know what?
I'm just going to keep on soldiering on. And guess what?
They have no oxygen.
At that point you have robbed the attack of any oxygen.
Speaker 1 (33:10):
And that applies to
anything.
Speaker 2 (33:11):
Yeah, 100%.
But it's hard in the moment, because the human feeling is just
wanting to come out and say: that's not true, that's not
right, I'm going to prove it to you.
But they've already planned for that.
And so you look at, you know, throughout history, and again,
what Marcus Aurelius was looking at as well: this will happen, but
it's a badge of honor in life, and you just press forward, and
those who know you and those who will meet you in the future
(33:31):
will actually recognize it.
And again, in my case, it was four years later that they came
back and said, yep.
Now, of course, when they came back later and said yes, there
was no fanfare, because we know the story of a vindication, for the
most part, does not get any of the air coverage of "oh, look,
there's horror over here."
Speaker 1 (33:46):
That's right, and so
you just have to be okay with
that.
You have to be okay with that. Now,
you've also worked extensively on US strategy around not only
AI, which we've talked a lot about, but also quantum
computing, and you mentioned early on in this interview
synthetic biology.
I want to ask you a question about a recent event that took
(34:06):
place on that.
What's your view on how these technologies might actually
converge, which is concerning perhaps? Oh, they're all
converging. What guardrails are most urgent to establish at the
moment?
Speaker 2 (34:22):
Well, so they are
converging.
So in bio, increasingly we are again giving people massive
capabilities that were unprecedented even just three or
five years ago.
And I look at, you know, we got out of COVID, fortunately,
because of what was possible with vaccines.
Had we tried to do vaccines the way we were doing them 20 years
ago, it would have taken two or three years, and that would have
(34:43):
been devastating.
So I celebrate that. I also am a big believer that the only way
we get through climate adaptation is going to be a
combination of both bio and AI.
I'm not a believer that AI and bio is doomed.
I know there's some people out there that are like: no, no, no.
One, we're confusing the fact that knowledge of something is
different than experience.
You know, you and I, we could have an AI, or we could
(35:04):
even read up ourselves on how to do home surgery, but we're not
ready to do surgery unless we've practiced a lot.
And so these people that say AI and bio is going to create
bioweapons, I'm like: they'd still have to practice a lot, and
the reality is there's a lot of mistakes that happen, and so
I'm less worried about that. But I do think, I mean, I've already
seen, for example, with computational biology methods, natural
bacteria, nothing synthetic in that sense.
(35:27):
It's natural bacteria that can use methane as a sugar source,
so it pulls a greenhouse gas that's between 22 and 40 times
as bad as carbon dioxide out of the environment, uses it as a
sugar source, and it returns nitrogen to the soil, making it more
productive for farmers.
It's almost like a two-for-one.
Now the trouble is, it's bacteria.
You can't see it. But imagine if we use space-based
technologies or even drones to image a farmer's field and say: I
see you've got these methane plumes from your cows.
(35:49):
I also see there's no nitrogen in your soil. You can imagine a
service that comes in and uses the bacteria to return the
nitrogen to the soil, get rid of the methane, and then it passes
over again and shows the methane is gone, it's fertile again.
Maybe the state government, or maybe the federal government,
gives you a tax credit because you removed the methane from the
environment.
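The arithmetic behind that tax-credit idea is straightforward to sketch. In the toy Python calculation below, the 28x multiplier is an assumed mid-range value inside the 22-to-40 range quoted above, and the tonnage and credit price are invented for illustration:

```python
def co2_equivalent(methane_tonnes, gwp=28.0):
    """Convert removed methane into CO2-equivalent tonnes.

    The interview puts methane at 22 to 40 times as bad as CO2;
    28 is an assumed mid-range multiplier, not a quoted figure.
    """
    return methane_tonnes * gwp

def tax_credit(methane_tonnes, price_per_tonne_co2e):
    """Hypothetical credit: CO2-equivalent removed times a per-tonne price."""
    return co2_equivalent(methane_tonnes) * price_per_tonne_co2e

# A farm whose bacteria service removes 5 tonnes of methane, at an
# assumed $30 per tonne of CO2-equivalent:
print(tax_credit(5, 30))  # 5 * 28 * 30 = 4200.0
```

The multiplier is what makes the two-for-one attractive: a small mass of methane removed prices out like a much larger mass of CO2.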
Speaker 1 (36:06):
It makes sense.
Speaker 2 (36:07):
I also see, for
example, natural corn.
There's companies coming out that will actually capture 10
times as much carbon dioxide in the growing process of the corn.
So I think that's how we get through it.
But with quantum, I mean, there's many things quantum.
But if we're talking about what's possible with quantum
computing: if companies aren't already thinking about what's
their strategy for when quantum decryption is possible, they
(36:30):
should, because we know there are state actors that are
capturing data and just saving it for later.
There are strategies they can use, but you need to prepare for
that.
But also, there's other things that quantum gives.
I say whatever a technology takes, it also gives, in the
sense that quantum key distribution can let you know if
someone else is listening online.
That's a way to actually harden your communications.
(36:51):
Is that right?
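The quantum key distribution point, that a listener reveals themselves, can be illustrated with a classical simulation. This is a minimal BB84-style sketch under idealized assumptions (perfect qubits, no channel noise): measuring in the wrong basis randomizes a bit, so an interceptor shows up as an elevated error rate when the two parties compare part of their key.

```python
import random

def qber(eavesdrop: bool, n: int = 2000, seed: int = 0) -> float:
    """Return the observed bit error rate on the sifted key."""
    rng = random.Random(seed)
    errors = matched = 0
    for _ in range(n):
        bit = rng.randint(0, 1)            # sender's key bit
        basis_a = rng.randint(0, 1)        # sender's encoding basis
        value, basis = bit, basis_a
        if eavesdrop:                      # Eve measures in a random basis...
            basis_e = rng.randint(0, 1)
            if basis_e != basis:
                value = rng.randint(0, 1)  # ...wrong basis randomizes the bit
            basis = basis_e                # ...and re-sends in her basis
        basis_b = rng.randint(0, 1)        # receiver's measurement basis
        if basis_b != basis:
            value = rng.randint(0, 1)
        if basis_b == basis_a:             # keep only matching-basis rounds
            matched += 1
            errors += value != bit
    return errors / matched

print(qber(eavesdrop=False))  # 0.0: nobody on the line
print(qber(eavesdrop=True))   # roughly 0.25: the tampering is visible
```

On a clean channel the sifted bits agree exactly; with an interceptor, roughly a quarter of the compared bits disagree, and that discrepancy is the alarm.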
Speaker 1 (36:54):
That begs another
question; so many questions. In
terms of the way... I don't want to single out a single platform,
but there's so many digital platforms, and especially younger
folks are posting everything.
They're out there posting pictures, and that's all they do
all day long. And then you have a political system, I'll just use
(37:17):
the United States, and there's something called opposition
research and all that.
I think the future is going to be rife with a lot of
different things. For a lot... I mean, I don't think the younger
folks today... maybe we've already seen some in politics come out:
oh, this is a picture of this person or that.
But can you only imagine, Dr Bray, what's coming at us for
(37:41):
future Supreme Court nominees, for congressmen, senators,
presidents, all these things?
We're still at the point, we're at the age, where that
didn't exist for us, right?
But we're starting to get to the point where, for the next
generation, they're posting everything, and it's all there for some
(38:04):
bad actors.
They're just storing that somewhere.
Yep. You know, like, I saw a movie on how the KGB had an agent
on Ronald Reagan when he was even in Hollywood, right, following
him up through the ranks. They didn't know if he was ever going
to become president, but they were tailing him and building a
file on him.
(38:27):
You know, a human file? Yes.
Speaker 2 (38:31):
What's your thought
on that?
Well, that's why I tell people, you know, the good and the bad:
we've now given people the capabilities of the CIA and the
KGB. So you're absolutely right.
I mean, we know there have been some high-profile compromises of
data, OPM being one, but others, and we've never seen that data
show up on the dark web.
So we're like, well, why was that captured?
And so one might ponder that maybe there is a state actor
that is building a regression model of what are the traits of
(38:54):
possible future hires for certain communities. And if I
can figure out what those traits are and, like you said, maybe
through TikTok and other means, I happen to capture them doing
something embarrassing, or something that they wouldn't
want to be seen later.
I mean, I saw there was a trend, not on TikTok exactly, but it
was like military TikTok, and I'm like: nooooo,
what could go wrong?
But, yes, that's happening and, like you said,
(39:15):
it's also possibly happening domestically too. And so I'm not
saying you should not have conversations online, but I
think you should be aware.
I think, again, that's where we need to go back to the idea that
I'm going to assert certain things truly are from me. Because
what I'm also seeing, and you mentioned, like, in the last two
or three years, I'm seeing certain cases where there's a
(39:35):
CEO of a company or a CFO of a company who got an approach, they may
not have been aware that it was from someone who was a foreign
agent, got put in a compromising position, and then either it
was captured and they could be blackmailed, or they did the
right thing and said no, no, no.
The trouble is, it was captured, and then, using generative AI,
you can make it look like they did say yes, which is still just
(39:56):
as damaging.
And so what I'm trying to tell companies is, again: you may not
care about geopolitics, but geopolitics cares about you,
and right now you, as companies, are the front line for multiple
threat actors.
What's your mechanism if a member of your C-suite or a
member of your board does something silly, or even does
the right thing, but now there's something out there that makes
it look like they did the wrong thing?
How do they come in from the cold and say so?
(40:17):
I would rather know that than have them be extorted or
blackmailed, and then they start doing things that even sabotage
your company, or the country.
That's going to need to be done.
For the younger generation,
we may reach a point where two things happen.
Either, one, we all realize we're all human and we're all
frail, and that's okay. Or, and I'm not endorsing this strategy,
but I have actually wondered: at what point in time will there
(40:39):
be services where you hire things to flood the zone with
material that is questionable as to whether or not it's you,
again reinforcing the idea that, unless I've endorsed this,
assume it's bunk.
And that may already be happening. And so I think we
will get through it.
Actually, in some respects, if you look back again to the 1890s:
the 1890s was full of virtue signaling, in which people
(41:02):
believed that they were only one dimension; they were
caricatures.
Those caricatures may not actually match who they were.
I think we need to recognize that we're all multidimensional
people, and that's okay, but I'm wondering when the virtue
signaling period will end.
Listen, and you're optimistic?
I am, because we humans, I think every 10 years we respond
to the last 10 years, and we organically reject. And you're
(41:23):
saying, you mentioned, how this is...
Speaker 1 (41:30):
Is it, is it multiples of the Industrial Revolution as well?
Speaker 2 (41:32):
Yeah. I understand, I think we...
Speaker 1 (41:34):
I mean, again, this,
this is... Give me a number on that.
What would you think?
How many X of the Industrial?
Speaker 2 (41:41):
At least 5X.
So, I mean, as I gave in the example, we're experiencing
more change in the next five years than we experienced in the
last 20 years.
5X, yeah, at least 5X.
Speaker 1 (41:49):
And you think this 5X
opportunity is juxtaposed upon
the Industrial Revolution, for opportunities?
Speaker 2 (41:55):
Yeah, 100%, there's
definitely opportunities, and I
think what we need to be aware of is this is not just one
revolution.
There are at least five or six parallel ones. And I actually
celebrate free-market systems, in the sense that free-market
systems ultimately end up with people having access to things
they never did.
Think about ice cream.
Ice cream used to be only available to the royalty, in the
court of France, and now we all have ice cream. Yay.
But there's this lag period between when it's only available
(42:18):
to a few and when everybody has it, and that can be exploited to
create grievances, saying: well, they've got it, you don't, or
they've got this. And so now we've got five or six things
happening in parallel.
That's creating a whole debate.
So we need to accelerate as much as possible making it so
that people actually have access to this, and it's not
centralized to just a few. Because if it's centralized to
just a few, sadly, autocratic regimes will target that and use
(42:42):
that to create wedges in free societies.
Speaker 1 (42:45):
And so local
organizations are increasingly
on the front lines of navigating global disruption.
Let's talk very briefly about small and mid-sized companies.
How do they prepare for this geopolitical and technological
turbulence when resources are limited?
On the back end of that, an example: I was talking to a guy recently who
(43:09):
laid off 200 people in a call center and contracted with one
of these generative AI offerings to supplant what
he was paying $100,000 a month for with $1,000. Right? How do
you navigate that, right?
Speaker 2 (43:29):
So, obviously,
business will have to change.
To not change would actually be death.
I do think what's useful, and I realize small businesses,
mid-sized businesses, they're dealing with the turbulence: go
back to first principles.
What do you want to do well? Double down on that.
Figure out your combination of human and technological strategy.
I recognize I did a PhD and then immediately went to
(43:51):
Afghanistan, but my PhD was actually on what was called
collective intelligence, and collective intelligence is
defined as: how do humans and machines make better decisions
and lead to better outcomes?
Not just machines. And I think that's where, for
various reasons, maybe they're pursuing
IPOs and things like that, there are certain AI companies that are
selling AI only, and I'm like: no, no, no.
As we already said, with generative AI you
(44:13):
want the human to say: yes, I'll use the AI as the
first source, but then I want to go and triangulate it.
Is that the case?
Was that really the winner of the baseball game in 1941, or not?
You see that being a permanent scenario?
Oh, 100%.
When the printing press came out, all these conversations we just
had, they were having then. They were saying it was the end of
the world.
The Catholic Church was saying it was the end of life as we
(44:33):
know it, even though they were an early investor in the
printing press.
By the way, Martin Luther actually pinned up his 95 Theses
and created a great schism.
There were people lamenting that this was going to lead to
diminished quality of text, because it was mass printed as
opposed to hand copied.
All these things were there.
There was actual intellectual property theft and things like that.
We got through it. At the same time, while it was the end of
the world as we know it when the printing press came out,
(44:53):
I don't think any of us want to go back to 1399.
So we are undergoing a similar end of the world as we know it.
But if we make deliberate choices, we can make it a better
future.
And if you look at what happened with books:
it's not like books replaced people; it's just that we use books
to actually speed up our decision time and have more
informed knowledge.
I think with AI, this version that AI is going to do
(45:14):
everything, I'm like: it will do a lot of the rote and
repetitive things. Great. For the novel,
you actually want a human-AI hybrid.
I'll give an example. At the tail
end of the last administration,
there were some export controls that I think they kind of rushed
out the door. I mean...
Speaker 1 (45:29):
I'm nonpartisan.
Speaker 2 (45:30):
I swore an oath to the
Constitution, not to a party.
But they kind of rushed it out the door, and on a whim I went to
a GPT of choice and I said: pretend you are a nation of 1.3
billion people at which these export controls are targeted.
I want you not only to navigate around them, but I want you to
find ways to make money doing so.
And the GPT went poof and gave me five answers.
So I think the future is actually a hybrid, an AI, human-led
(45:51):
team.
Because my recommendation back to the National Security Council
and CFIUS is: before you put out any future export controls,
make sure you see how the AI is going to game it.
Is that right?
But again, you want the human, sure, of course, to look at it.
And so, actually, with Alan
McCarthy, we actually have an
unclassified paper where we also did the same thing with
an AI, where we said: pretend you're certain nation-state
actors; how would you use the structural organization of the
(46:14):
intelligence community as we know it against us?
And of course it's unclassified, because it's coming from AI.
And then we said: OK, now do an analysis of competing hypotheses
on how you would better organize the US intelligence
community to deal with it. And on its own, I mean,
there was probably some prompting from us, but on its
own, it said: you've got to pair AI with humans in the community.
Is that right?
(46:34):
So they are always saying: this
is what I want to do, but what am I missing?
And it's worth knowing.
So one of the original sort of founders of early AI research,
his name was Herb Simon.
He won a Nobel Prize, but his PhD actually was on decision making
in New York State, and it was on how limiting it is that
administrative decision-making is often limited to just
the things we know.
(46:55):
We don't go to the other horizons.
Whether or not that motivated him to do AI, yeah. But I raise
that because what AI can do, even generative AI, is it can
expand our horizons and say: you may not have thought about this,
but there might be a better opportunity or a better approach
to this risk over here, and then the humans can qualify
it and say that makes sense or not.
And so I'm a big believer in collective intelligence.
I realize right now certain companies are trying to
(47:16):
sell just AI, because they get that massive IPO pop, but I
think we've got to empower the edge. It's got to be AI that
runs locally on your laptop or your desktop, not some
centralized server, especially for small businesses.
And if it can run locally, then we can have that decentralized
goodness that makes the US great, and go from there.
So all of the data would be your data set?
Oh, I'm a big believer.
Either it's all your data, or it's what I call data
cooperatives.
You enter into a contract, and you don't even need to wait for
government regulation; this is just existing contract law.
I've done it with the UK government.
We're actually piloting it here in the US, where you say: we, in
this case it's called Birth to Threes, it's individuals that are
trying to make sure their infants get the necessary
physical, mental and emotional care, obviously a very vulnerable
population: we do not want that data monetized.
So we've done a contract that has basically said that not only
(48:01):
do these people say that their data will never be monetized,
but they actually have representation, and every three
months they make sure the data is being used only for the
purposes they've set aside.
And I would say that for small businesses: you don't have time
to do that on a daily basis.
But if you're in a cooperative, it actually then gives you the
ability to go to those AI companies and say: we will let
you have our data in return for the following things. Maybe it's
(48:22):
reimbursement, maybe it's financial gain. Maybe I care
about Parkinson's research, and I'm willing to let some of my
health data be used for Parkinson's research, in return
for the company's promise that when the drug comes out they're
not going to charge exorbitant rates.
But again, it's a negotiation, in the end.
Speaker 1 (48:38):
What I'm most excited
about is your optimism, 100%,
and you're someone who's thrived in... let's talk about this.
I just recently saw the Mission Impossible film.
You're that guy, right? Living on the edge of helping to
save the world and change the world and staying ahead of some of
these trends and technologies.
(48:58):
What is one piece of advice that you'd give to future public
servants across the world, technologists, civil leaders, et
cetera, about leading with integrity in the face of
uncertainty?
Speaker 2 (49:17):
So, you know, the
world has always had massive
change.
I think we have a false nostalgia for the past.
I remind people, from 1971 to 1972 there was an 18-month period in
US history where there were more than 2,500 bombings in 18
months.
Speaker 1 (49:31):
We feel like this is
new.
Speaker 2 (49:32):
I'm like, oh no,
we've forgotten.
We can imagine the 1960s, the 1920s. And so this is where
integrity, but I would also say competence and benevolence,
come in.
These are three things that have shown that, if you operate
with integrity, competence and benevolence, people are willing
to trust you.
I define trust as the willingness to be vulnerable to
the actions of an actor you cannot directly control. And I
would submit, right now, the trouble is, there's so many new
(49:53):
technologies out there that are making people uncertain about
benevolence, uncertain about competence, uncertain about
integrity, and that's why we have this crisis of trust, plus
the fact that people feel like their lives are being disrupted.
So, three things.
One, establish your network of people that are
willing, and you've given them permission, to tell you when
you're doing something stupid or crazy, because the reality is
(50:14):
we're all going to have blind spots. But have your personal board
of advisors, whether you can call them and they'll tell you
in three days, or they can tell you in three months.
You need to have that, because we all are human.
The second thing, though, is always take the case to look at
what the data is telling you and ask the question: how wrong does
this data have to be, or how incomplete does it have to be,
(50:34):
for me to change my decision?
It's called decisional elasticity.
How much does it need to change? And then, finally, always plan
for a pivot.
If the decision you make in that moment needs to change...
people back themselves into a corner when they think that
the decision they made is the decision they have to hold on to.
The reality is, we're all going to have new data.
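The decisional elasticity question, how degraded or incomplete the data has to get before the decision flips, can be made concrete with a toy model. The decision rule and every number below are invented for illustration; the only idea taken from the conversation is that a "go" call should come with a pre-computed expiry.

```python
def proceed(benefit: float, cost: float, age_days: int,
            completeness: float, decay_per_day: float = 0.03) -> bool:
    """Toy rule: go ahead while the benefit, discounted for stale and
    incomplete data, still exceeds the cost. All parameters invented."""
    confidence = max(0.0, 1.0 - decay_per_day * age_days) * completeness
    return benefit * confidence > cost

def elasticity_in_days(benefit: float, cost: float, completeness: float,
                       decay_per_day: float = 0.03) -> int:
    """First day of data staleness at which the decision flips to 'no'."""
    days = 0
    while proceed(benefit, cost, days, completeness, decay_per_day):
        days += 1
    return days

# Benefit 100, cost 50, fully complete data: the 'go' call holds
# through day 16 of staleness and should be revisited on day 17.
print(elasticity_in_days(100, 50, 1.0))  # 17
```

Knowing the flip point in advance is the "plan for a pivot": the day the data ages past it, the decision gets revisited rather than defended.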
Speaker 1 (50:53):
It's not about
anything.
Is it about anything?
Speaker 2 (50:56):
Yeah, it is.
I mean, it really is.
You just have to pivot and everything like that.
But people anchor to their decision, right, and the trouble
is... it's okay to change your mind.
That's actually human.
That's learning.
Speaker 1 (51:07):
That's growth.
Speaker 2 (51:08):
I actually tell
people: if you're not growing, you're not going to feel
uncomfortable. And so it's moving forward as to how we move
forward.
So I think, again, it's really about: have your personal board
of advisors;
make sure you look at the data, but ask how wrong it has to
be for you to change your mind; and then, finally, plan for pivots.
Plan for pivots.
Speaker 1 (51:28):
You know, this
morning, today, these
interviews, and this interview specifically as well, I mean, it's
profound in terms of where we are.
I want to thank you for your insights, your perspective and
your service to our nation.
Thank you, George.
Dr David Bray, thank you so very much.