
January 26, 2025 72 mins

This episode explores how technology, specifically AI, is reshaping military strategy and national defense. Colonel Jason Hansberger and author/entrepreneur Tim Henderson discuss the implications of AI in warfare, the ethical considerations surrounding autonomous weapons, and the U.S. military's strategic innovations.

• Examining the significant U.S. defense budget and global military expenditure 
• Understanding the new age of Great Power Competition
• Addressing the role of AI and innovation in military operations 
• Discussing the ethical implications of autonomous capabilities 
• Highlighting DAF Stanford AI Studio’s mission and objectives 
• Exploring practical AI applications within current military contexts 
• Assessing the future of military technology and decision-making processes 

Colonel Jason Hansberger and Tim Henderson join us for an eye-opening discussion about the ever-evolving world of technology and defense. Colonel Hansberger, with his expertise in Air Force strategy, and Henderson, a former submarine officer and author, offer a unique lens on how artificial intelligence is transforming military operations. As we navigate a landscape marked by geopolitical challenges, such as North Korea's involvement in Ukraine and Donald Trump’s re-election, we examine America’s strategic role on the global stage and the ethical considerations that come with the use of AI and autonomous technologies in defense.

Our conversation takes a broader approach as we explore the global competition for AI supremacy. We analyze the economic and political ramifications of controlling semiconductor manufacturing and advanced computing technologies. This episode looks into the historical ties between military applications and technology, the controversies that arise when tech giants merge with defense projects, and how initiatives like the DAF Stanford AI Studio can bridge the gap between academia, industry, and the military. We even consider the profound impact AI could have on decision-making processes in complex environments and the nuanced dynamics of autonomous warfare.

As we wrap up, we get personal with our guests, learning about their aspirations beyond their military careers. Tim shares his plans to publish novels inspired by military service, while Jason reflects on his life philosophy of seeking significance over success. Together, we address pressing recruitment challenges faced by the military and propose innovative solutions that could enhance outcomes.

Jason Hansberger: https://www.linkedin.com/in/jason-hansberger-b1b15374

Jason Hansberger is a leader in technology and strategy, currently serving as the Director of the DAF-Stanford AI Studio and AMC/CC Special Advisor & Director, Technology Capability Development for the United States Air Force.

Tim Henderson: https://www.linkedin.com/in/tim-henderson-53772116

Tim Henderson is a multifaceted professional with a background spanning military service, finance, and real estate development. He is a former submarine officer and nuclear engineer from the United States Naval Academy, where he served in various roles on a ballistic missile submarine, ultimately as a Missile Officer. For the past 18 years, he has been a multifamily developer and general contractor in Silicon Valley and is the President of Cypress Group. He is also the author of the novel "Nobody Wins Afraid of Losing".

Producer: Anand Shah & Sandeep Parikh
Technical Director & Sound Designer

Website: https://www.position2.com/podcast/

Rajiv Parikh: https://www.linkedin.com/in/rajivparikh/

Sandeep Parikh: https://www.instagram.com/sandeepparikh/

Email us with any feedback for the show.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Rajiv Parikh (00:05):
Hello, and welcome to the Spark of Ages podcast. We have another special episode where we're talking about the topics of technology and AI, and we're going to learn specifically about defense technology and how the U.S. Air Force plans to leverage U.S. innovation for our national defense. And I have two really amazing guests today.

(00:25):
So this is, you know, I get to meet these amazing people in the world, and then I get to bring them to you. I have Colonel Jason Hansberger. Jason is a leader in technology and strategy, and he's currently serving as the director of the DAF-Stanford AI Studio and AMC/CC special advisor and director of technology capability development for

(00:47):
the United States Air Force.
Prior to his current role, he participated in the SECDEF Executive Fellowship at Autodesk. He has also served as the deputy director of the HAF Strategic Execution Group and as commander of the 1st Airlift Squadron. So he flies planes and he does policy.

(01:09):
Jason holds a Master of Arts in Regional Studies with a Southeast Asia concentration from the Naval Postgraduate School, and he did his undergrad at the Air Force Academy. He is focused on applying state-of-the-art solutions in AI and autonomy to solve Air Force problems. So this is going to be really interesting. We're going to learn about the special program

(01:30):
that he's driving. So Jason, you're here today, but your views here are your own. Yes?

Jason Hansberger (01:36):
Yeah. The important disclaimer is that all views expressed on this podcast are mine and not those of the Air Force or the U.S. government, and shouldn't be construed as such. And any mention of any product or service or anything like that is also not an official endorsement from me or from the government.

Rajiv Parikh (01:53):
And then we have my friend, Tim Henderson. Tim is a multifaceted professional with a background spanning military service, finance, and real estate development. He is a former submarine officer and nuclear engineer from the U.S. Naval Academy, where he served in various roles on a ballistic missile submarine, ultimately as a missile officer.

(02:15):
He holds an MBA from Harvard Business School; he was in the same class as me, or the same year as me. Following his military service, Tim spent four years as an investment banker and venture capitalist in Boston. For the past 18 years, he's been a multifamily developer and general contractor in Silicon Valley, and president of the Cypress Group.
One of the reasons I wanted Tim here today is he's the

(02:36):
author of an upcoming novel, Nobody Wins Afraid of Losing. It's a really interesting upcoming book that talks about where we're going in the future with some of these autonomous weapons and where the big powers are going. So I thought it'd be great to have him here because he's done so much research; and of course Jason, because he

(02:57):
is living it; he's part of what's driving the future. Some of the key takeaways you can expect from this episode: AI's role in transforming military operations, challenges in acquiring and developing AI talent for military applications, and the ethical and strategic considerations of autonomous weapon systems. Those three topics and more.

(03:17):
Gentlemen, welcome to the Spark of Ages.

Tim Henderson (03:19):
Thanks, Rajiv.
It's great to be here.
Delighted to be here.
Thank you.

Rajiv Parikh (03:22):
All right.
I'm super excited to have you here. This is a very interesting topic. We have a lot going on in the world today. And let me just set up some context so that we can guide our discussion. Some things that you guys probably already know: U.S. defense spending in 2023 was 916 billion dollars.

(03:45):
It accounts for 40 percent of global military expenditures. It's greater than the next nine countries combined. However, we have some competitors, and some strong competitors. For every dollar China allocates to the military, the U.S. spends more: two dollars and 77 cents. However, maybe we can argue that China, because of where it is,

(04:07):
gets more bang for the buck. That would be an interesting topic. U.S. military spending is 3.1 times higher than China's estimated 296 billion dollars. So it's a really significant amount of money that we spend.
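A quick arithmetic check on the quoted figures; note that the two ratios imply slightly different estimates of China's budget, which, as the guests point out shortly, is itself uncertain:

$$\frac{\$916\text{B}}{\$296\text{B}} \approx 3.09 \approx 3.1\times, \qquad \frac{\$916\text{B}}{2.77} \approx \$331\text{B}$$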
However, we have troops all around the world. We have, what, I think 12 aircraft carriers, and troops in over a hundred countries.

(04:27):
And really, the U.S. is in many ways the guarantor of stability and peace around the world. And so it's really important that America continues to stay ahead. And that's why this is such an interesting topic.
So let me start with the first question, and this is for both of you. At the end of 2024, we saw some incredible developments globally.

(04:48):
North Korea sending troops to fight for Russia against Ukraine. We saw Assad's fall in Syria, Donald Trump's reelection victory, to name a few. So I'd like to set the stage for our listeners, from your point of view, in terms of where America's role is as we start 2025. And then we can get into thinking about where

(05:11):
we're heading in terms of how we develop strategy for this environment, and how we should develop technology for national defense. So Jason, you want to kick it off?

Jason Hansberger (05:18):
Yeah, sure. I'll kick it off. So maybe the first point is just on the money piece: that's the publicly available budget. I don't know if China's budget is actually around 200 billion or higher; it's hard to know, so it's a rough metric. I think if I were to describe the global order, we have a phrase that we use called

(05:38):
global power competition, or great power competition. And a lot of people compare it, saying: hey, maybe we're moving into a new Cold War. And I actually think that's not a great description. The way I describe it is: the Cold War was kind of the emergence of two competing models for governance and economics. And, you know, both of those systems came out of World

(06:00):
War II and then grew up in kind of this multipolar world where there wasn't a ton of interaction. There was clearly, at the frontier, interaction between the Soviets and the US-led Western order, but really they grew up separately. And then the competition essentially was: who's got the best order? And they figured that out through the Cold War, and in the 1990s the West kind of emerged as the victor of

(06:21):
the Cold War. But this is a different competition. And I call this the great power competition for everything, everywhere, all at once. And the reason I call it that is there are some important differences. The first is the nature of, or not the nature, but kind of the character of this competition, which is: it's

(06:42):
not that there were two separate models growing at the same time. It's that the Chinese economy really grew in conjunction with the Western economy. And what we're looking at now is more of a messy divorce than feuding neighbors. And so as this competition plays out, and as the heat

(07:04):
does or doesn't rise, you have to figure out how those things get disentangled. And in addition to the entanglement of the economies, which didn't exist in the Cold War, there's also the technological connection. Today it's completely different than it was 30 or 40 years ago, and so there are

(07:24):
hundreds of millions of interactions between Chinese and Western elements every single day, and China and the United States then have to make a decision about whether or not those interactions generate negative outcomes that need a response.
And so it's not that there's kind of a world here and a world there. It's that they're occupying the same

(07:46):
spaces, a lot of the same domains, and there have to be a lot more decisions about what to react to and what not to react to. And so this is a much more intertwined and difficult situation to manage than the Cold War was.

Rajiv Parikh (08:00):
It's a really good point, right?
I mean, it's not like before. You're right: if you go back to the 60s, 70s, 80s, it was very much two different worlds, two different economic systems. We were distanced from each other. Now we're very much intertwined.

Jason Hansberger (08:15):
That's right.
It is a different game, and the interdependencies make a big difference. And, you know, there are arguments to be made that the interdependencies help maintain the peace. And so the relationship changes every time something's disentangled, because the price of conflict decreases if it doesn't have as much of an effect on your day-to-day economic activity.

Rajiv Parikh (08:36):
Tim, your thoughts?

Tim Henderson (08:37):
First of all, on China: I echo, I agree with Jason's comment. It's hard to know what their actual budget is. The other thing is, our expenditure per military person is much higher than theirs, you know, with the current compensation, the benefits package. And so I think it would be more interesting to do

(09:03):
an analysis adjusted by that factor, like

Rajiv Parikh (09:06):
a purchasing power parity version of it.

Tim Henderson (09:08):
Yes.

Rajiv Parikh (09:08):
Which I'm sure they've done.

Tim Henderson (09:11):
Yeah.
So, on the world order: I am what I would characterize as an optimistic neoconservative, in spite of our record in recent wars. So you look at Iraq: it was a war

(09:31):
predicated largely on weapons of mass destruction, which did not exist. In Afghanistan, over 20 years we invested more than 2 trillion dollars and thousands of American lives, and a month after we withdrew, the Taliban had entirely retaken the country. So that was a failure, and to

(09:52):
call it anything other than a failure, I think, is counterproductive to learning how to do things better as a country and as a military. Going forward, specifically on the topic of the military use of AI, or highly autonomous weapons, which I believe we're going to come to.

Rajiv Parikh (10:13):
Really interesting point of view there. We're definitely going to get to that part of the discussion. In fact, that sets up my next question. So think about this geopolitical landscape: Jason brought up the notion of great power competition, moving away from more terrorist-oriented threats to great power competition. So how do we leverage the U.S. military, how

(10:37):
do we leverage AI and these autonomous, maybe semi-autonomous, eventually autonomous systems to maintain a competitive edge, right? I mean, access to the technology is much easier to attain than it used to be. Before, you were building weapons systems where you could be much further ahead; they were super expensive to develop.

(10:58):
Nowadays with AI, it's a little less expensive. So Jason, maybe you could just touch on that.

Jason Hansberger (11:04):
Okay.
So when it comes to competing for artificial intelligence, I don't think the first place the competition plays out is in the military application or military employment of artificial intelligence or autonomy. It's really, I think, that the competition exists primarily economically and politically, for the ability to design and manufacture and field

(11:26):
computing and artificial intelligence technology. And I think you see this with the semiconductor: it's a great example of trying to control the state of the art in compute through controlling who can and can't have access to it, and that requires partnerships, from extreme ultraviolet lithography machines

(11:48):
down to the fabrication, the fabs. And essentially the reason that we're competing there is because, and we can get into it later, I use computing and artificial intelligence as kind of a theory of an era-defining technology. And if the era is defined by who best wields computing and

(12:10):
artificial intelligence, then it makes sense to compete for those things, not only now but into the future. And so when we're thinking about these tariffs or things like that, it's about who can control the path of the future of those technologies. And that, of course, can lead to undesired escalations, because the future finds its way to the present, and then, as the heat of that competition grows,

(12:33):
sometimes you lose agency. You may not want an escalation, but when participating in international relations, sometimes you spark an escalation that goes beyond your ability to control. And so there are dangers even in economic and political competitions for these things.

Rajiv Parikh (12:53):
So it's kind of intertwined, right? That notion that it's not just pure military and technological development. You need a strong economic capability to build these things, to build these capabilities, and that's part of the competition. So Tim, your thoughts on that?

Tim Henderson (13:10):
So the story of technology and the military is intertwined, right? Silicon Valley grew out of military technology, largely. But in the past 30 years, there's been a divide that has grown into a chasm, in some sense, between

(13:31):
Silicon Valley and the military.
So 4,500 AI researchers, for example, have signed an open letter saying that they would like a ban on highly autonomous weapons, and they specifically don't want their projects to be used for military purposes.
So Google, for example, had Project Maven, which was,

(13:54):
Jason can probably speak to this better than I can, but my understanding was that it was a project that used artificial intelligence to sort through and categorize the huge volumes of data coming from Predator and Reaper drones, and to make some sense of that in ways

(14:16):
that humans were overwhelmed by, right? And when that came out, it caused a great deal of controversy at Google, in Silicon Valley, and even worldwide.
Same thing with Project HoloLens at Microsoft: it was a set of augmented reality goggles that could be used by military people, soldiers. But, you know, it's still

(14:39):
in the formative stages; it's still a beta technology at best. But you can envision the future: someone has augmented reality goggles, they're in a firefight, and the AI is not only networking them with the cloud and other soldiers, but basically instructing them what to do.

(14:59):
And that caused a great deal of controversy. So we have this kind of schizophrenia in our country over whether we want to use technology for the military or not. And I'm hoping that we get into this, because I see highly autonomous weapons as a very interesting subset

(15:22):
of technology, because, going back to my earlier thesis, I think they will be the sine qua non of military power, but the majority of people on the planet don't want this. And I can go into great detail on that as well.

Rajiv Parikh (15:39):
So that leads a little bit to what you're doing with the DAF-Stanford AI Studio, right? I think some of this comes from the feeling that, as Tim talked about, Silicon Valley kind of grew out of military applications. The original semiconductor, the original microprocessor,

(16:02):
was developed, or dramatically enhanced, because of its military applications in missile technology. And so some of what you're doing here is building bridges between universities, companies, and the military. So maybe talk about the problem and what you're looking to actually solve.

Jason Hansberger (16:23):
So, kind of my very big-takeaway, macro view, is that humans define themselves in terms of the era-defining technology; we define ourselves in terms of technology. I start this argument with: it's not just a technology, it's the technology that's most important in building both

(16:43):
economic and military advantage.
And it has to do both.
And so typically this technology would be very ubiquitous and useful across many, many things. And the very first age of humans we think of, we call the Stone Age, right? And the reason we call it the Stone Age is because the manipulation of stone into tools, into the useful things, generated this durable and ubiquitous

(17:05):
advantage, you know, in both the military and economic sense.
And then that was displaced by bronze, which was displaced by iron, which... then we call it the Industrial Revolution, but I really think it's the electrical revolution: it's really the ability to generate and manipulate and distribute energy in order to magnify human activity. And then in the 1950s, we kind

(17:26):
of had this idea: people were like, we're entering the nuclear age, and this was a common belief. But we didn't ever enter a nuclear age, and the reason we didn't is because it was too limited. As a weapon, it didn't displace all conventional military activity. As an energy source, it didn't displace all conventional energy. And it had some limited medical uses, but it

(17:48):
wasn't wide enough to be an era-defining technology. Where I think we're at today is compute and AI, or its most important application, which is artificial intelligence and autonomy. And so a couple of years ago, as I was working at Headquarters Air Force, to me, we weren't talking enough about what I was thinking of as the era-defining technology.

(18:11):
And it's not just for military application; it's for national security. It's really rooted in economic strength and growth and dynamism, and the same with the military. And if this is the technology, then we really need to be at the place where the state of the art is, to ensure that we're advancing our national interest in the advancement of these things, and that they're being developed in a way that's responsible.

(18:32):
And so, through my partner here, who is a PhD student and just recently defended his thesis... you know, I'm telling this story like it's really deductive, but there's a lot of intuition, too. He and I, at the same time, had the same feeling, and we were connected via a mutual friend who said: you guys should probably work together, and you should do it here, if what you want to do is have some kind of influence on

(18:55):
where the research is going for AI and autonomy and compute. And so that's how we came to be here. I really very strongly believe this is the technology that will drive humanity. And the thing that displaces compute as the next era? I don't know what it's going to be. But I know that if we saw it today, it would appear as if it's magic.

Rajiv Parikh (19:15):
That's right. It does feel like that, right? When you use these things, just when we play with ChatGPT: I feed in my MRI results, and it not only tells me what's going on, it gives me a breakdown, it sends me an image, and it tells me what I should do next, right? And I showed it to my surgeon, and she's like, wow... well, at least you need me to do the actual work. So it's

(19:38):
pretty magical. So, to talk about what you're doing today: this whole program is about bringing the different groups together, right? And it's one of the many different programs that the Defense Department runs, that the military groups run. So maybe just talk about what you're

(19:59):
trying to accomplish with it.

Jason Hansberger (20:01):
The way that I see artificial intelligence employed today, kind of with the existing technology, is that someone is an expert in something. They're an expert coder, they're a doctor, they're an animator, and they also happen to have enough expertise to recognize how to employ an artificial intelligence tool. And then they take that tool and they apply it in

(20:22):
a digital domain or a very highly structured physical domain, and they can achieve some pretty incredible increases in their productivity. But that breaks down really, really quickly when you try to apply an artificial intelligence tool in a highly complex domain, or something with high dimensionality. It's called out of distribution: essentially,

(20:43):
if you're using your artificial intelligence, it will encounter an out-of-distribution event, which is an edge case, and then it doesn't know what to do.
And so if we want to bring artificial intelligence tools into the physical world in a way that's trustworthy, safe, and reliable, then we need to advance the way

(21:04):
that we build our algorithms. We need more modeling and simulation to reduce the sim-to-real gaps, so that I can more accurately iterate and build the models that I need. And I need more efficient compute to bring this kind of inference and optimization to the edge devices that the military wants to use. And so our research center is really focused on these three research vectors, and that's what we're here to do:

(21:28):
to advance in those three research vectors, because we think that's what we're going to need to field the kind of autonomy.
I mean, I don't know if I can get into this later, but the kind of autonomy I'm envisioning is not from the very start to the very end. I don't think we have the technology now, or even in the near future, to fully autonomize

(21:51):
activity in a highly chaotic physical environment.
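What Jason calls an out-of-distribution event has a standard textbook formulation. A minimal sketch in Python, assuming a simple Gaussian model of the training features; the features, data, and threshold are all illustrative, not from any Air Force system:

```python
import numpy as np

# Minimal sketch: flag out-of-distribution (OOD) inputs by Mahalanobis
# distance from the training feature distribution. Illustrative only.

def fit_stats(train_feats: np.ndarray):
    """Estimate mean and inverse (regularized) covariance of training features."""
    mu = train_feats.mean(axis=0)
    cov = np.cov(train_feats, rowvar=False) + 1e-6 * np.eye(train_feats.shape[1])
    return mu, np.linalg.inv(cov)

def is_ood(x: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray,
           threshold: float = 5.0) -> bool:
    """True if x is far from the training distribution (an 'edge case')."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d)) > threshold

# Usage: fit on nominal data, then screen live inputs before trusting the
# model; inputs flagged as OOD get escalated to a human instead.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 4))            # stand-in for nominal features
mu, cov_inv = fit_stats(train)
print(is_ood(rng.normal(size=4), mu, cov_inv))               # usually False
print(is_ood(np.array([9.0, -9.0, 9.0, -9.0]), mu, cov_inv)) # True: edge case
```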

Rajiv Parikh (21:53):
So right now, what you're talking about is: we are using AI techniques, but it's not fully autonomous agents yet, because when you get into the real environment, beyond the compute environment, out into the real world, all kinds of things are happening that may not be well characterized and well understood.

Jason Hansberger (22:14):
Yeah, I mean, just look at self-driving cars and how hard this is. Multiple trillion-dollar companies have tried and failed to deploy self-driving cars. And we've structured the environment for those things to work, and we have millions and millions of miles of data. And you're still getting a disengagement rate that doesn't really

(22:35):
make it fully autonomous. But they're doing that in a pretty friendly environment. Now imagine if all the pedestrians, everywhere it's going, and all the other cars were purposely trying to make that car crash. The disengagement rate would be through the roof. It really wouldn't be a usable technology. The technology assumes a cooperative environment,

(22:56):
but if you're going to take autonomy into a combat environment, it's the opposite. Everything that your competitor or enemy is doing is going to try to make it more complex, more chaotic, and introduce edge cases that aren't going to make that autonomy useful. And so when we think about autonomy, to me the design principle is really about maximizing

(23:18):
the use of human cognition, not displacing human activity. Fully displacing it is just not technologically feasible within the near future.

Rajiv Parikh (23:29):
I see.
So if you think about it, the closest thing we can see to a real-life environment is what's happening in Ukraine, right? Initially, a set of drones came out and caused asymmetric damage to more expensive vehicles: tanks, missile launchers, et cetera.

(23:50):
Then eventually Russia comes back, and they're able to use radar jamming; they're able to use maybe counter-drones. And so you're in this environment where whatever you create, there's going to be a counter to it. And it's a much different environment than driving down the freeway, even though that might seem like a combat environment.

Jason Hansberger (24:07):
Yes. Yeah, you know, combative driving, at least, is just aimed at drivers' own self-interest of getting home sooner, not at making sure that you crash on the way there.

Rajiv Parikh (24:20):
Yeah.
So, what are some of the practical applications? You're saying that in today's world, there are ways you can use AI and autonomous technology to improve the development of systems or the testing of systems. Maybe you could just talk to some of those projects.

Jason Hansberger (24:40):
Yeah. So one of the projects that we're working on is using machine learning techniques to modernize the way that we go about flight test. Right now we employ what are called engineering best practices to design a testing

(25:00):
profile to explore the flight envelope of an airplane. So the flight envelope of an airplane is: what are the parameters within which an airplane can fly without departing flight, without crashing? And the way that we've previously done this is we said, well, the plane needs to be in a negative-20-degree down dive, pulling three Gs, with the gear door open,

(25:21):
or something like that. And if the test pilot misses any of those parameters within a kind of arbitrarily defined window, then they say that one doesn't count; they throw it out. And so what we've said is, well, there's probably a lot of useful data even if you don't hit this heuristically driven point. So let's just use that data.

(25:43):
And so by applying this machine learning technique, we're now able to use all that data. As it stands, it takes test pilots typically six attempts for every one testing point. And so now we're able to say: hey, we could do this in six times fewer flights, because you're going to hit your point every time, because the tolerances aren't so tight. And so for six times faster and six times less money,

(26:06):
we can achieve the same kind of surety in understanding the parameters of flight. And so this is kind of a great example of our approach, which is to really go inside an Air Force organization, or to a problem holder, and really help them define the problem, which we can then connect to an outside academic solution if

(26:26):
it requires research, or to a commercial off-the-shelf, industry state-of-the-art solution, to help them solve their problem, or maybe a combination of both.
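A toy sketch of the statistical idea behind the flight-test project as described: regress on the conditions the pilot actually flew instead of discarding near-misses, then read the model off at the exact target condition. The model, numbers, and tolerances here are invented for illustration; this is not the actual Air Force method:

```python
import numpy as np

# Toy illustration: instead of discarding test points that miss a target
# condition (say, exactly -20 deg pitch at 3.0 g), regress on the
# conditions the pilot actually flew, so every attempt contributes data.

rng = np.random.default_rng(42)

def measured_load(pitch_deg, g):
    """Stand-in 'truth' for some envelope quantity, plus sensor noise."""
    return 0.8 * pitch_deg + 2.5 * g + rng.normal(0, 0.5, size=np.shape(g))

# The pilot aims for (-20 deg, 3.0 g) but scatters around it each attempt.
pitch = -20 + rng.normal(0, 2.0, size=30)
g = 3.0 + rng.normal(0, 0.3, size=30)
y = measured_load(pitch, g)

# Old approach: keep only attempts inside a tight tolerance window.
keep = (np.abs(pitch + 20) < 0.5) & (np.abs(g - 3.0) < 0.05)
print(f"points kept under tight tolerances: {keep.sum()} of {len(y)}")

# ML-style approach: fit a model to ALL attempts, then evaluate it at the
# exact condition of interest.
X = np.column_stack([np.ones_like(pitch), pitch, g])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated load at (-20 deg, 3.0 g): {coef @ np.array([1.0, -20.0, 3.0]):.2f}")
```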

Rajiv Parikh (26:35):
So that's really cool.
So, as you guys know, I'm into marketing. And so, is that your go-to-market? You have problems that are happening when you're developing the systems at the Air Force, and now you're saying, well, how do I connect research or how do I connect companies to this?

(26:56):
Yeah.
What's your go-to-market for that?

Jason Hansberger (26:57):
Yeah, my go-to-market is twofold, because my primary customer is the Air Force. And so my first value proposition is: I can help you define a problem. A lot of times in the Air Force, and other places too... I use this example to kind of illustrate where problem definition sits in our approach. One morning I wake up. I have four kids,

(27:21):
so this makes sense to me. I wake up and my wife says: hey, we're out of baby formula. You need to go to the store to get some more. And so, from a military sense: okay, let's do a mission analysis. We had a logistics failure, but that's irrelevant now; we're going to have to fix that in the future. But now I have this no-fail mission: get the

(27:42):
formula or the baby dies.
So, you know, I salute and I head out to the car to go get the formula. But when I get out to the car, it's snowing. And so you may say: okay, now we know the problem. The problem is that I need to be able to get to the store no matter what; the environmental conditions don't matter. But I'm going to solve this problem with a six-wheeled

(28:04):
vehicle with 36 inches of ground clearance and all-wheel drive, with a snow plow and some autonomy built into it. But the problem there is that I haven't really defined the problem. What I've done is an environmental analysis. And this is where we come in, right: we're going to bring the technical expertise to help you get one step further, which is that the real problem is

(28:26):
the coefficient of friction that exists between your tires and the road, or the lack thereof. And so there's a mechanical solution, in the form of snow tires or chains, or a chemical solution, in the form of salt, that we could apply to this problem, and you could be on your way using the existing technology that you have.
And so our value proposition, first, is: let us bring the technical expertise to you, the problem holder, the operator, so we can

(28:48):
really get to the point where we can do one of two things. Either, and this is the fastest, a novel application of existing means, which you could very quickly field and spread to an enterprise; or, and this is slower but sometimes necessary, the development of a novel technology, and that requires us to go do research.

Rajiv Parikh (29:08):
So you're saying part of the problem is that in the past, you might have designed a vehicle for that one significant situation where you have to be 36 inches above the ground and be able to go through snow, right? You might have designed this really special vehicle for that, but instead you're like: no, this is not going

(29:30):
to happen every time; it's going to happen sometimes, maybe, but I'm going to design something that can be more adaptive. So your solution is to help people understand those problems and then bring out potential solutions from different constituents.

Jason Hansberger (29:44):
Yeah. I think, one, it will help you solve the problem. And then two, we can connect you to the state of the art in both industry and academia. Because part of that would be: well, I just didn't even know that someone sold chains. I didn't know that there was salt that you could store by the road. I didn't know that someone had already developed a studded tire. And if you had known that that product existed, it probably would have pulled

(30:04):
you further into the problem, because you would have said: no, no, wait, guys, we don't need 36-inch tires for this one time. We could just put studded tires on the car and we'd be just fine. So part of it is just bringing an awareness, and I don't have it all either, right? But it's building a network, and having a geographical location here gives you a better shot at knowing someone who might know of the solution.

Rajiv Parikh (30:26):
And are you offering funds
for those companies?
Is this part of that?

Jason Hansberger (30:30):
Yeah. So there are two ways. One: we find a company, and we can put that company under contract through different mechanisms, like the Federal Acquisition Regulation, but there are also some non-FAR mechanisms, you know, like other transaction authorities. It gets really arcane, really esoteric, really fast, but we do have mechanisms that put people under contract;

(30:52):
the problem holder will have the mechanism to put someone under contract to help them solve their problem. Like, we just hosted AFWERX, which is kind of our innovation acquisition arm, and we just did a two-day workshop. We're building an autonomy architecture for Air Force Special Operations Command's autonomous technology, and

(31:12):
we had 40-plus vendors show up for two days and help us define problems, and then we're going to evaluate this architecture for utility. So that's one. And then the other is garnering what are called research and development funds, to then bring to Stanford to conduct research here if needed. And our primary partners are the Aero/Astro, Electrical Engineering,

(31:34):
and Computer Science departments.

Rajiv Parikh (31:36):
That's super cool.
It's a way to get things done quickly and enable things to move faster. So, when you talk about highly autonomous weapons: Jason, this is maybe your own personal view, not the Air Force point of view, but where do you think things are going?

Jason Hansberger (31:50):
Yeah, so I'll say the first place it has to go is to develop algorithms that can work with people. I need an autonomous tool that I can task. And so, at a very low level, we can say: what do I want it to do? I want it to fix a video problem or an audio

(32:13):
problem. But in the physical environment, let's say I want it to fly from point A to point B. In a peacetime environment, that's pretty standard, a very well-structured environment, especially in the US air system, where you can build an autonomous solution to that. But as it does that, what I really need is: if I give it a task to accomplish, I need it to either

(32:37):
accomplish the task given the constraints and restraints that I gave it up front, or recognize when it can't do it with the level of surety that I've told it to. So I may say: hey, if you are 90 percent sure that you can accomplish this task, then keep going and do it. But as soon as you realize that you're not at that threshold, I need you to raise your hand and say: I need a

(33:00):
human intervention, or I need to let you know that I'm not going to accomplish this in the way that you thought. This is actually really hard. It's probably one step less hard than saying: go from takeoff to landing, do everything in between, and never need any intervention. One step below that is to say: get this task done, but then recognize

(33:22):
when you need help.
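The pattern Jason describes, execute while self-assessed confidence stays above a human-set threshold and raise your hand otherwise, is a common human-in-the-loop control loop. A minimal sketch in Python; the task, the confidence estimate, and the 90 percent threshold are illustrative assumptions, not a real system:

```python
from dataclasses import dataclass
from typing import Callable

# Minimal sketch of the escalation pattern described above: the autonomy
# keeps executing while its self-assessed confidence stays above the
# threshold a human tasked it with, and "raises its hand" otherwise.
# All names and numbers are illustrative, not from any real system.

@dataclass
class TaskStatus:
    done: bool
    confidence: float  # autonomy's estimate that it can still finish

def run_task(step: Callable[[], TaskStatus],
             ask_human: Callable[[TaskStatus], bool],
             threshold: float = 0.90) -> str:
    while True:
        status = step()
        if status.done:
            return "task complete"
        if status.confidence < threshold:
            # Below the surety level the human demanded: pause and escalate.
            if not ask_human(status):
                return "task aborted by human"
            # Human granted more agency; continue under their direction.

# Usage: a fake task whose confidence dips once before recovering.
ticks = iter([0.99, 0.95, 0.85, 0.97, 1.0])
def step() -> TaskStatus:
    c = next(ticks)
    return TaskStatus(done=(c == 1.0), confidence=c)

print(run_task(step, ask_human=lambda s: True))  # -> "task complete"
```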
But the important thing is thatthis is how humans work, right?
Rajiv, if you and I are working together, say you're my boss and you're like: go get this done. Then I go do it until the point that I reach the limit of my agency, or the limit of my capability, and then I put my hand up and I say: hey, you need to give me more agency to go do this thing if you want me to finish the job. Here's where I'm at.

(33:43):
Are you good with me going forward? Or: hey, I can't figure this out, I just need another perspective. Or: I'm out of bandwidth, you need to send me someone else to help me, because I can't get this done on my own.
But that's howhumans work together.
I think that's the kind of autonomy that we need to build, so that in uncertain environments, or where there's potential for encountering edge cases or novel cases,

(34:06):
the autonomy can augment a human. Versus, if these systems are so fragile, then they're just going to consume all the human's cognition in trying to supervise them.
If anybody here drives a Tesla: I'm driving down the freeway and I get frustrated at it yelling at me for not looking at the road enough, but I'm like, this car is capable enough to safely execute the

(34:29):
task at hand without me having to stare down the road, because I can see what I need to see on the screen, and I'm geeking out on the energy consumption. So it's a policy problem there, right? But then when I drive it in a neighborhood with no lines and no curbs and trees randomly there, then it's more work for me to supervise its

(34:50):
physical execution of the driving task than it is for me just to drive it myself, right?
And when you havethose mismatches,
it's not super useful.
And so for us as an Air Force, and this is my view, right, my thinking: our design principle needs to be the maximization of human cognition,

(35:12):
and then you build out capability from there. Versus: I want to take a human completely out of this thing and let it go do its thing. I think that's really, really difficult and more likely to generate either undesirable outcomes or mission failure.

Rajiv Parikh (35:28):
That's really helpful to understand.
Um, so: the Department of the Air Force, or DAF, Stanford AI Studio is working to ensure that the DAF has enough personnel with PhDs in AI and related fields. So considering the strategic importance of AI for national security, what unique opportunities does the DAF-Stanford AI Studio offer that

(35:52):
may attract talent seeking to make a meaningful impact beyond commercial applications? How do you address the value proposition for AI experts?

Jason Hansberger (36:01):
Yeah, so maybe there are two audiences for this one. One is Air Force and Space Force airmen and guardians who are out there and really want to be connected. Getting into Stanford for a PhD is really, really, really hard.

Rajiv Parikh (36:19):
Oh yeah. My son is applying. It's hard.

Jason Hansberger (36:21):
But, you know, I think us having a presence here helps reduce some of the risks that a professor would consider when taking on an Air Force student. There are two hurdles to clear for an airman or guardian. The first is for the Air Force to approve you to go study for a PhD,
And then the even harderthing of getting an elite

(36:41):
research institution to accept you for admittance.
And I think that having the studio here, because of those personal relationships and connections to some of the PIs, the principal investigators, or professors, makes it easier for the professor to say: this is a person who I think can successfully navigate this very, very

(37:02):
challenging PhD program in three years. Because that's what the Air Force requires: a three-year PhD, sometimes expandable to like three and a half.
What most people are doing in five years, we're asking our airmen to do in three. And so by connecting to the studio as an airman now, before you apply, you have an opportunity to do some research, show some of your research chops, and make some of the connections with the people who make

(37:23):
decisions about admissions, or at least make recommendations for admissions.
And so I think it definitely increases the probability of your acceptance, although that comes down to you, your ability, and whether you have what it takes. From the civilian side: if you're a PhD student at Stanford, or you're a professor at Stanford, and you want an opportunity to have your research fielded

(37:46):
immediately on physical assets, through our partnership with, for example, the test center,
this is a really, really exciting way for you to prove your research in the real world, on a timeline that would be really difficult to achieve otherwise, because not everybody has airplanes that they can dedicate toward testing profiles. So I've found that a lot of professors become

(38:08):
excited at the prospect of being able to not only do applicable and useful research, but then validate it in the real world on a really accelerated timeline.

Rajiv Parikh (38:21):
Well, that's super cool.
So three years, that's very fast for a PhD. I love it. And it's an exciting way to get your project funded and get it done in an elite place. So it's really amazing. So, Tim, I have this question for you. I think you've been alluding to it, and we want to really understand this, right? One of the reasons you wrote the book is to

(38:41):
help us think practically about where these things are going.
So, as AI systems become more prevalent in military decision making, here's a question that's on the minds of a lot of our listeners, right? We always keep thinking way out, and we say: oh, it's like Terminator or Skynet, that kind of situation. So how do we make sure that humans remain the

(39:02):
ultimate decision makers? How do we maintain accountability for actions taken by AI systems?
And as a society, how do we ensure that the development and deployment of these weapons align with our values and international norms?

Tim Henderson (39:16):
So, first of all, I'll say that I believe the U.S. military currently, if you look back through history and you look across geography right now, is probably one of the most ethical militaries that's ever existed in the history of humanity.

(39:37):
Right?
So you saw it in Iraq and Afghanistan. We had significant civilian casualties, unfortunately, but we also took enormous pains and went through elaborate protocols to avoid that as much as possible.
And you're also seeing that right now in what I

(40:01):
will say is our reluctance to use highly autonomous weapons. Now, and I feel quite strongly about this, I think that eventually that reluctance will go away. It will go away entirely. Highly autonomous weapons, or HAWs as I call them, weapons which will navigate over large geographies and long periods

(40:24):
of time and kill with complete autonomy, will become the sine qua non of great power conflict. Maybe not against the Taliban, but in a great power conflict against Russia, China, North Korea, or Iran, within the next 30 years. And that will happen across all arenas.
So, we're seeing that right now in Ukraine, primarily

(40:48):
in aerial warfare, but also in sea surface warfare and ground warfare. But it will also expand to space, cyber warfare, psychological warfare. And I think what will happen is we will build these capabilities slowly, and then, and it could be a decade to three decades

(41:08):
in the future, but the time of another 9/11 or another Pearl Harbor will come, and, you know, the nuanced perspective, these reservations, will evaporate.
Okay.
That's my belief.
And if you think back in history to World War II, for example, I

(41:29):
think there's a very instructive historical fact, which is: prior to Pearl Harbor, unrestricted submarine warfare, submarines targeting merchant ships, was illegal. And the U.S. was a signatory to not one but two different treaties making it illegal.

(41:51):
So you could not simply shoot an oiler or a merchant ship, a Japanese merchant ship, without first evacuating the crew to a position of safety. And within six hours after Pearl Harbor, the uniformed officers in the Pentagon and in the War Department

(42:15):
said: conduct unrestricted submarine warfare against Japan. And there's no evidence that they consulted the civilian authority over them. And so it's a very interesting historical example of how, when your back is against the wall, things change, and they change very quickly.
And I think another great example of that is what we're

(42:36):
seeing right now in Ukraine.
Ukraine has made enormous strides in autonomous warfare, in ways that, prior to Putin invading several years ago, I don't think anyone would have predicted: the effectiveness of drone warfare in aerial, in sea

(43:01):
surface, in ground warfare.
Just, I think it was eight or nine days ago, the Ukrainians had what I would call an autonomous air-ground assault platoon. Now, these weapons were not completely autonomous; they were first-person-view weapons, right?

(43:21):
But they attacked with no humans attacking alongside them, right? And so I can go through why I believe in this sine qua non hypothesis. And look, I put out 30 years as the horizon. It could be 20 years.

(43:42):
It could be 40 years.
It's hard to say with precision, right?

Rajiv Parikh (43:47):
Yeah.
So you think, basically, once you're attacked, a lot of these rules that you have kind of go out the window, right? And at the same time, potentially, there are some things that we don't do: we don't use nuclear weapons, we don't use biological weapons, and for the most part, we don't use chemical weapons.

(44:10):
I understand there are certain types of weapons that can blind people in battle that are not being used. I wonder: is it that just because you have it, you have to use it? Or maybe it'll backfire; it'll boomerang on us. I don't know. Jason, if you have a thought on this one: you guys have studied this way more than I have.

Jason Hansberger (44:29):
Yeah, I think international relations theory would definitely point to norms changing under extraordinary pressure. And so maybe a way to think of it is that values, or restraints that limit your effectiveness, can become luxuries that you can't afford when

(44:51):
you're contemplating your own existence. And so you will see rapidly evolving norms, rules, policies, even laws, in proportion to the threat that exists.

Tim Henderson (45:05):
So, Rajiv, back to two points. One, on the difference between nuclear, biological, and chemical weapons and highly autonomous weapons: they're quite different. And there's a book called Army of None, by Center for a New American Security analyst Paul

(45:27):
Scharre, that does a really nice job of analyzing this.
It basically says: look, it would be a big mistake to extrapolate the relative success that we've had at preventing proliferation of NBC weapons to autonomous weapons, for the following reasons. One, they have enormous military utility, which nuclear weapons don't really have, in that, now,

(45:48):
obviously, nuclear weapons are extremely destructive, but they also carry enormous stigma, and so people have not used them since 1945, right?
Two, they're not clearly defined, right? The UN, after 14 years of going back and forth, still doesn't have a definition. Three, they lack a horrific nature, right? NBC weapons have a pretty clearly defined horrific

(46:13):
nature. Four, autonomous weapons are often transparent: if a Predator or Reaper drone launches a Hellfire missile, if you're on the ground, you can't tell whether an algorithm authorized that launch or whether a human did, right? And five, they're the ultimate dual-use technology.

(46:35):
Right.
So take autonomous vehicles: Waymo, whatever company, an algorithm gets developed, and eventually that algorithm finds its way to controlling tanks or armored personnel carriers or other autonomous

(46:56):
military vehicles, through, you know, espionage, through...

Rajiv Parikh (47:01):
Somehow technology gets out, right? I mean, it's always going to be, or it already might be developed the other way, and it's coming one way or the other. We don't know for sure. So you're right: one is super high cost, where the other, AI and autonomy, is more incremental and has common usage, dual-use usage.
So it can seep in.

(47:22):
And in a way, we do have wars, in the sense that we have cyber wars, and because they're quote-unquote low cost, they're happening all the time, right? There are all kinds of things happening that are quote-unquote low cost. And so there's an argument, which I think you make really well, that these sorts of weapons, when they

(47:45):
make sense, we'll use them, because they'll help us win, right? Okay. AI and autonomy present the ability to enhance various aspects of the kill chain.
How can the military effectively leverage AI to improve the speed, accuracy, and effectiveness of this process, while accounting for the human element?

Jason Hansberger (48:08):
Yeah. So I don't really use the term kill chain too much, because it's not a great term. I like to say long-range decision making with imperfect information.

Rajiv Parikh (48:25):
I like that: long-range decision making. You must have an acronym for that, like you had for everything, everywhere, all at once.

Jason Hansberger (48:31):
I don't yet.
I probably do need one.
But the challenges faced in long-range decision making under imperfect information aren't unique to a kill chain. There's a lot of research on, for example, where to pre-position your Ubers. So this is a very, very common

(48:51):
problem faced by anybody trying to manage a lot of elements in a geographically diverse space while collecting information, because we have sensors, and so does any technology company. So how do you solve these optimization problems and then seek to make better decisions from them?
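The pre-positioning problem Jason alludes to can be made concrete as a small facility-location exercise: choose staging sites that minimize expected distance to uncertain future demand. A toy sketch; the data and the greedy heuristic are invented for illustration:

```python
import numpy as np

# Toy facility-location sketch: pick k staging sites that minimize expected
# distance to uncertain future demand. Data and heuristic are invented.

rng = np.random.default_rng(7)
demand = rng.uniform(0, 100, size=(500, 2))      # sampled demand scenarios
candidates = rng.uniform(0, 100, size=(40, 2))   # possible staging sites
k = 3

def expected_cost(chosen: list) -> float:
    """Mean distance from each demand point to its nearest chosen site."""
    sites = candidates[chosen]
    dists = np.linalg.norm(demand[:, None, :] - sites[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

# Greedy selection: repeatedly add the site that most reduces expected cost.
chosen = []
for _ in range(k):
    best = min((i for i in range(len(candidates)) if i not in chosen),
               key=lambda i: expected_cost(chosen + [i]))
    chosen.append(best)

print("staging sites:", candidates[chosen].round(1))
print("expected distance:", round(expected_cost(chosen), 2))
```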
And, you know, as for how to preserve the human

(49:15):
element in it: to me, it's ensuring, one, that all your operations are embedded with clearly, well-thought-out, values-driven constraints and restraints. Because even if we say, and we're not there, but let's say there is this fully autonomous

(49:35):
thing that you can send out on missions, nothing's really fully autonomous, right? Because at some point it's being given a mission directive from a human. And within that mission directive you're saying: these are the tasks you have to accomplish; these are the decision parameters that must be met. And then there's somebody building those,

(49:56):
so there's lots of humanness within all this building, until the autonomy spawns itself and runs its own country and things like that. These things reflect the humans who built them. And so I think making sure that, from the very start of the creation of any kind of autonomous

(50:18):
system, you're embedding it with the kinds of values that are important to us: that's where you have to start. And then, based on policy and values and the situation, you need to put gates at decision making to ensure that there is a human there to understand it and to make the decision.
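One way to picture the structure Jason describes, a human-issued mission directive carrying values-driven constraints, with gates where a human must approve before the autonomy proceeds, is a sketch like this; every name and field here is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of a human-issued mission directive with values-driven
# constraints and human-approval gates. Field names are illustrative.

@dataclass
class MissionDirective:
    tasks: list
    constraints: list                         # e.g. rules of engagement
    gated: set = field(default_factory=set)   # decisions needing human sign-off

def execute(d: MissionDirective, human_approves: Callable[[str], bool]) -> None:
    for task in d.tasks:
        if task in d.gated and not human_approves(task):
            print(f"halted at gate: {task}")   # a human was there, and said no
            return
        print(f"executing: {task} (under {len(d.constraints)} constraints)")

# Usage: only the engagement decision is gated; the rest runs autonomously.
directive = MissionDirective(
    tasks=["transit to area", "search sector", "engage target"],
    constraints=["weather minimums", "no engagement near civilians"],
    gated={"engage target"},
)
execute(directive, human_approves=lambda task: False)  # human declines
```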

Rajiv Parikh (50:38):
So Tim, do you think that AI will eventually be able to set its own mission directives? And if so, what should we do about it?

Tim Henderson (50:47):
I agree with Jason in that, in my timeframe of 30 to 40 years, a human still is sort of setting the parameters, right? Like, this is the commander's intent. But my definition of a completely autonomous weapon is

(51:08):
again, one that navigates over large geographies, for long periods of time, and kills with complete autonomy. No one's picking a particular target, like: I want you to kill this particular tank. It's: okay, I want you to go out over this thousand square kilometers over the next three days and kill any enemy

(51:29):
vehicle or troop concentration you see in that time. That's what I term a completely autonomous weapon, and I think they'll be very effective across all those arenas. I'll tick through these reasons quickly, and if you have a question on one or two of them, let me know. One is reaction time, whether it's a

(51:50):
Navy Phalanx, a collaborative combat aircraft, or a Bullfrog anti-drone machine gun. Two is they operate without fear and fatigue. Three is they can be engineered to function largely imperviously to G-forces, cold, heat, and altitude, and they're largely impervious to

(52:11):
biological, chemical, and radiological weapons. Four, they have enormous cost advantages: you strip out the armor plating, you strip out the human user interface. Five, they can operate with a high degree of plausible deniability, so that's, you know, Russia's little green men or the Gary Powers flight. And six, autonomy could be incorporated into less-than-

(52:31):
lethal weapons, like rubber-bullet machine guns or tear gas dispensers. And I think that's important, in that the Tiananmen Square protest worked, at least temporarily, because of the Chinese army. You know, the famous picture of the

(52:52):
tank man in Tiananmen Square: that tank commander didn't want to run over that civilian, right? An autonomous version can be programmed to do that. So, in other words...

Rajiv Parikh (53:07):
Right, but someone has to control it, though. Someone has to give it that objective.

Tim Henderson (53:11):
But what I'm saying is the dictator, the authoritarian power, no longer needs to convince some significant subset of the police and army to repress the population.

Rajiv Parikh (53:22):
That's right. So, like, you're in a protest and you tell the police to go harm the crowd, and you hope the police think of their own families before they do it. Some don't, some do, right? And you're saying that possibility exists, whereas with an autonomous weapon, it may not have that decision

(53:42):
making fail safe, in a way. But, I mean, I don't know about you, I still feel like we're going to have to hold the person accountable, the one who sent that system in.

Tim Henderson (53:53):
Well, of course. I mean, you would hope that in our international rules-based world order that would happen, but...

Rajiv Parikh (54:02):
But the potential exists. So the fail safes aren't there to the same extent.

Tim Henderson (54:07):
Right. So, you know, I don't know that we have time to fully explore this, but in a nutshell, I think that highly autonomous weapons are a very interesting subset of technology, because the majority of the people on the planet do not want them to happen. But they will absolutely happen within the next 30 years.

(54:30):
And so what does that mean about the relationship of humanity and technology? You can use that as kind of a crowbar to pry open that box. And I think what you find is that humanity doesn't put a lot of, or really any, restrictions on developing technology. Technology

(54:53):
gets developed, and if it's controversial, sometimes it gets developed in secret, and then we backfill the morality around it. And in a world of nuclear weapons, engineered pandemics, and climate change, are we reaching a point where those technologies are operating so quickly

(55:16):
that we cannot fix the problems that they create? You know, I think humanity has two options. One is what I would call more tech. The other one is change human behavior. Right? And what more tech looks like is more technology. So, for example, for nuclear weapons, we come up with Reagan's Star Wars, right?

(55:36):
For climate change, we inject atmospheric sulfate to reflect more sunlight. But again, in a world of hypersonic weapons, of torpedoes with nuclear warheads, in an age of potentially a point of no return on

(55:58):
climate change, which a lot of people think is 2 degrees Celsius above preindustrial temperature levels, the more tech thing doesn't work, because you don't have enough time to deploy the solution.

Rajiv Parikh (56:21):
Jason and Tim, welcome to the Spark Tank, where military strategists dissect the future of high tech warfare. This is where two seasoned officers, one former officer, one current officer, with deep experience in national security strategy and military innovation, are forced to leverage their expertise and engage in a high stakes debate.

(56:42):
They're not just discussing the theory of AI and autonomy. They've been to the front lines of implementing these technologies in the military. They've dissected the complexities of integrating AI into the military, confronted the ethical dilemmas of autonomous weapons, and explored the talent landscape required to make it happen. Now it's time for fun. This is the ultimate military innovation showdown

(57:03):
where only one strategic vision can prevail. So basically this game is two truths and a lie. I'm going to read you three statements. Two are true, one is a lie. You've got to pick out which one is the lie. I'm going to count down, three, two, one, and you're both going to show your fingers, whichever it is, three, two, or one.

(57:23):
So you can't cheat off each other. Not that you ever would do that; you're good military men, so I'm assuming you'll be totally ethical with this. I'm going to read you three things, and you're going to say which one the lie is. I hope you know your eighties history; pop culture and autonomous vehicles is our subject.

(57:44):
So I hope you guys are ready. Here's round one. Number one: in the 1989 Batman film, the Batmobile featured autonomous driving capabilities, allowing Batman to summon it remotely. Number two: in the TV series Knight Rider, KITT could transform into a submarine and navigate

(58:04):
underwater autonomously. Number three: the 1993 film Demolition Man accurately predicted video-calling tablets and autonomous vehicles with voice control. Okay, so number one's Batman, you can summon it autonomously; number two is KITT going underwater; and number three was Demolition Man talking about all the stuff we do today.

(58:28):
All right, three, two, one.
All right, I have Tim at three and Jason at two. So the winner of this round is Jason. While KITT had many advanced features, underwater transformation was not one of them. The other two are true depictions from their

(58:49):
respective films.

Jason Hansberger (58:51):
I just never remember KITT being near the water, so I had to go with that. I think it's a lot more dust and dirt and donuts than jumping in water.

Rajiv Parikh (59:00):
That's right, I think that's a good catch. I don't remember it having that kind of underwater Chitty Chitty Bang Bang thing. Anyway, alright, round two. Number one: the James Bond film Skyfall from 2012 showcased an autonomous Aston Martin DB5 that could drive itself to Bond's exact location using satellite tracking.

(59:25):
Number two: the 1990 film Total Recall featured an AI-powered taxi called Johnny Cab, with a robotic driver that could engage in sarcastic banter. Number three: in Jurassic Park, from 1993, self-driving Ford Explorers guided visitors through the

(59:47):
park on electrified tracks.
Okay, you ready? Three, two, one. All right. So Jason is three, Tim is one. And the winner is Tim! So we have a tie.

Jason Hansberger (01:00:06):
Rajiv, those Ford Explorers
were not electrified.

Rajiv Parikh (01:00:12):
You got me there? No, they went through the park on electrified tracks.

Jason Hansberger (01:00:17):
They did go through the park on tracks, but the tracks were not electrified. It was like, you know, they let you...

Rajiv Parikh (01:00:23):
So this is a protest. I'm going to have to talk to my committee on this one.

Tim Henderson (01:00:30):
I'll accept it as a wrong; I will cede that to Jason. I think he's right about that.

Rajiv Parikh (01:00:36):
We'll have an extra in case we come to an impasse here. Okay. Number one was a lie. While Bond films often feature advanced car technology, Skyfall did not have a fully autonomous DB5. The other two are accurate depictions from their films. Other than the protest, a very detailed protest here.

(01:00:56):
All right, I like that. I'm glad you're so precise about it. Round three. Number one: the 2004 film I, Robot featured a shape-shifting autonomous car that could transform into a submarine for underwater chases. Number two: the 2002 film Minority Report depicted

(01:01:17):
autonomous cars running on vertical roads in a ruthlessly efficient but terrifying traffic system. Number three: in the anime series Ghost in the Shell, AI-powered tanks called Tachikomas could engage in philosophical debates while in combat.
So number one, I, Robot, a shape-shifting autonomous

(01:01:40):
car that could transform into a submarine for underwater chases. Number two, Minority Report, autonomous cars running on vertical roads. Number three, the Ghost in the Shell anime series, AI-powered tanks called Tachikomas that could have philosophical debates while in combat. So, you ready? Three, two, one.

(01:02:02):
Oh, we got a split here. I like this. Okay. So Jason said three, Tim said number one. The winner of this round is Tim. Wow, so he's watched a lot of old movies. While I, Robot did feature an autonomous Audi RSQ, it couldn't

(01:02:23):
transform to go underwater. The other two are accurate depictions from their respective works.
So I'm going to basically say, because one was under protest, we're going to do a round four. All right, here we go. We have a round four. Usually we finish at three, but we're going to do four today, which is more fun, because then my writers get more credit for all

(01:02:44):
the great work they do, and they have more party things they can do.
All right.
Number one: in the TV series Westworld, autonomous horses with AI capabilities were used as a mode of transportation within the theme park. Number two: the 1983 film Christine featured a sentient

(01:03:05):
autonomous car that could regenerate itself and hunt down its owner's enemies. Number three: in the Back to the Future trilogy, Doc Brown's DeLorean time machine had an AI assistant named WOPR that could pilot the car through different time periods. Three, two, one.

(01:03:29):
All right.
You both picked three,

Jason Hansberger (01:03:31):
Because, I mean, Doc and Marty had to drive the DeLorean.
That's right.
Yeah.

Rajiv Parikh (01:03:37):
The DeLorean in Back to the Future didn't have any. Is this name WOPR? Obviously, since I pronounced it incorrectly, you basically know my age. It wasn't true. It did have a...

Jason Hansberger (01:03:48):
Mr.

Rajiv Parikh (01:03:48):
Fusion. It did have a Mr. Fusion. That's right.
All right.
So I have you basically still tied. Do you want to run five?
This is it.

Jason Hansberger (01:04:00):
You got a fifth one? Sure.

Rajiv Parikh (01:04:01):
I got a fifth one. We're going for the win. Okay. Number one: the 2014 film Big Hero 6 featured an inflatable healthcare robot named Baymax that could transform into a high-speed autonomous flying vehicle. Number two: in the animated series The Jetsons,

(01:04:22):
George Jetson's flying car could fold into a briefcase when not in use. Number three: in the Transformers franchise, the Autobot Bumblebee can disguise himself as a yellow Volkswagen Beetle and drive autonomously while communicating through radio clips. Three, two, one.

(01:04:45):
Ooh, good. I actually could have had a winner if one of you guys had gotten it right! But you didn't. It was number two.

Jason Hansberger (01:04:55):
But did you say the Transformers movie
or the Transformers cartoon?

Rajiv Parikh (01:04:58):
Franchise.

Jason Hansberger (01:04:59):
Yeah.
I'm assuming it's a

Rajiv Parikh (01:05:01):
movie.

Jason Hansberger (01:05:02):
In the movie, he's the Camaro. GM sponsored that.

Rajiv Parikh (01:05:05):
Is he a Camaro? No, but I thought he was a Beetle.

Jason Hansberger (01:05:07):
He is in the cartoon.

Rajiv Parikh (01:05:08):
Okay, this is the cartoon.
Jason will correct us.

Jason Hansberger (01:05:12):
I'm such a nitpicker, I love it.

Rajiv Parikh (01:05:13):
I love that you're a nitpicker.

Jason Hansberger (01:05:15):
We're ending in a...

Rajiv Parikh (01:05:17):
Pseudo tie. While the Jetsons did have flying cars, they didn't fold into briefcases; the other ones are apparently true. You both did a terrific job. Depending on how you count it, you each got two or three, and that is very unusual for this game. So kudos to both of you.
All right.
I have a couple parting questions, and we're

(01:05:39):
going to do this really fast. This is what we call our fast final four. Here we go. And I might just have one of you answer each.
Your life path has led you to meet, interview, and do business with people across industry, government, and academia, so you've observed a lot of different cultures in action, too. In your view, what makes the biggest difference in terms

(01:06:00):
of organizational culture that leads to success?

Jason Hansberger (01:06:03):
Oh, it's mutual trust. Mutual trust and empathy, understanding each other's point of view. That's an easy one.

Rajiv Parikh (01:06:10):
Where did it work best for you?

Jason Hansberger (01:06:11):
When I was at the First Airlift Squadron, you know, we fly the vice president and other cabinet secretaries, and there's Sam Fox, which is how we describe the culture. It's best articulated in a single sentence, which is: I do everything I do doing my best, knowing that the person next to me is doing the same.

(01:06:34):
That second element is very, very empowering, and allows kind of a pure permission to be excellent. You have to have that, because if you have pure permission to be cynical, and we've all been part of those places, no one's great. But if you have pure permission to be excellent, we celebrate

(01:06:55):
excellence for each other. Then you can accomplish some pretty amazing things. But it all starts in mutual trust.

Rajiv Parikh (01:07:01):
I love that.
All right, we're going to quote that after this. Okay, Tim, what's your personal moonshot?

Tim Henderson (01:07:08):
I'd like to publish not only this novel; I actually have a long list of other novels I'd like to get published as well.

Rajiv Parikh (01:07:16):
All right. I enjoyed the novel. For me, it was like a combination Crichton and Clancy novel, so I learned a lot as I read; I had to look up a bunch of things. It was really fun. Here's a question for you, Jason. Do you have a favorite life motto that you come back to often and share with friends, either at work or in life?

Jason Hansberger (01:07:39):
Yeah, I have this motto. I shouldn't say I, it's we: my wife and I have a motto that you should seek significance over success. To me, success is inward defined and backwards looking. If you're trying to be successful, you can't say, I'm successful because next year I'll do this thing. It's, I'm successful because I got to this place that I defined

(01:07:59):
as a place I wanted to be.
But lots of times in life, you're going to face choices where you can try to be successful or you can try to be significant. And significance, to me, is outward defined and forward looking. So this podcast, was it a thing of significance? It really depends on, you know, Rajiv and Tim and

(01:08:20):
the listeners, and whether what we said is something useful to them and they take it forward and bring it to other people. And so significance is defined by the people around us, and it's defined by the future. So when it comes to making decisions in life, if you're trying to lead a life of significance, to do things of significance, it'll push you in directions, I think, that are pretty rewarding.

Rajiv Parikh (01:08:39):
That's really awesome.
That's a great line.
And I should just end right there, but I'm not going to; I'm going to keep going. What's the next thing you plan on starting, Tim?

Tim Henderson (01:08:51):
I want to go back for a second to the novel that I wrote, because I think what's important is why I wrote it. Two things compelled me to write that novel. One is an enormous admiration for everyone that serves in our military.

(01:09:12):
So, I was a submarine officer. My father was a career infantry officer, a company commander in Vietnam. My son is a Ranger-qualified first lieutenant in the 82nd Airborne. I would love for my novel to be compared even remotely to the novel by Anton Myrer, Once an Eagle. It's a

(01:09:36):
classic, and I strongly recommend that everyone read it. In addition to the novels,
I am in charge of programs for the Military Officers Association of America, the Silicon Valley chapter. And I'm hopefully going to recruit Jason to come speak to our group.
But in addition to a long list of

(01:09:59):
speakers I'd like to attract, I wrote a paper with one of my Naval Academy classmates in which we advocated a national defense auxiliary, and we submitted it to Joint Force Quarterly. It didn't get picked up by JFQ, but

(01:10:22):
it got very close, and we're hoping to tweak it and get it published somewhere. And the idea is basically that we have far more veterans than we do active duty people. And you look at countries like Ukraine and Israel, and a large part of their success in their defense against

(01:10:44):
vastly superior enemies is the way that they harness the whole of their nation to contribute to their military defense. And so I think there are a lot of things that veterans can do to help active duty units, everything from technology acquisition to recruiting. The

(01:11:07):
services have fallen short of recruiting goals most years in the past five years. And so I think it could be an incredible force multiplier, basically at no cost, and I'd like to try to be a small part of making that happen.

Rajiv Parikh (01:11:25):
All right. Great sense of purpose, I'd say, and a great level of service. I appreciate both of you for being here and engaging in this discussion and debate. I think it's super thought provoking and helpful, where, you know, the intention is to protect Americans and

(01:11:46):
protect those around the world that want to live in a rules-based world. I appreciate both of you for being here, talking about your points of view, the programs you're creating, and how you're trying to make the world a better place. So thank you so much for coming today.

Jason Hansberger (01:12:02):
Thank you, Rajiv.
It's been a pleasure.

Tim Henderson (01:12:04):
Yeah, thanks, Rajiv.
And Jason, you as well.
I've learned a lot.