
June 27, 2025 · 45 mins

Can AI-driven autonomy reduce harm, or does it risk dehumanizing decision-making? In this “AI Hot Takes & Debates” series episode, Daniel and Chris dive deep into the ethical crossroads of AI, autonomy, and military applications. They trade perspectives on ethics, precision, responsibility, and whether machines should ever be trusted with life-or-death decisions. It’s a spirited back-and-forth that tackles the big questions behind real-world AI.

Sponsors:

  • Outshift by Cisco: AGNTCY is an open source collective building the Internet of Agents. It's a collaboration layer where AI agents can communicate, discover each other, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication, and modular components to compose and scale multi-agent workflows.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Jerod (00:04):
Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're

(00:24):
in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.
Now, onto the show.

Daniel (00:49):
Welcome to another episode of the Practical AI Podcast. This week, it's just Chris and I. We're joining you for what we call a fully connected episode, which is one without a guest, where Chris and I just talk about a certain topic, or explore something that's of interest to us, or that

(01:10):
we've seen in the news, or help kind of deep dive into a learning-related resource. I'm Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin.
How you doing, Chris?

Chris (01:28):
Hey Daniel, I'm doing fine. Looking forward to today. We have something fun planned.

Daniel (01:32):
This should be fun. Yeah, so for our listeners, we've been in the background of the show really talking about, well, a lot of things about the future of the show. Some things in terms of, you know, things like rebranding and updating our album art, which you'll see soon, new intros, outros, but

(01:55):
also new kind of focus areas for content on the show, really being intentional about some of the things that we're exploring. But if you've been listening to us for a while, you also know that Chris and I like to have fun together and explore things in interesting ways, just the two of us. Some of our team here

(02:17):
came up with the idea, well, what if we took some shows and had a kind of AI hot takes and debates type of show?
This is the first iteration of whatever we'll end up calling this. AI Hot Takes and Debates, that's a good one. But basically

(02:38):
the idea here is that there's a topic which is part of the wider conversation around AI that people are divided on. And we will take some of the arguments for one side and the other side. And it really doesn't matter what either Chris or I actually think

(03:00):
on the subject. Maybe some of that will come out in the discussions.
But really, Chris, you take one side and express some of those arguments for the one side. I take the other side, express some of those arguments, and we discuss it together, hopefully exposing the audience to both sides of a topic. Also,

(03:21):
it sounds like fun. What do you think, Chris?

Chris (03:24):
I think it sounds like a lot of fun. And of course, as we were coming up with the first topic, you might say I threw a hand grenade into the mix.

Daniel (03:31):
Good one, good one.

Chris (03:32):
There you go. And it's a topic of particular interest to both of us, especially to me. But it's important that I say we're gonna talk a little bit about autonomy in warfare, and in some areas outside of warfare. But I really wanna point out, I really wanna emphasize, you know, that Daniel and I don't necessarily take a hard side on either topic. We're

(03:53):
kind of... it's an assignment to take one side or another. But I also wanna emphasize, because I work for Lockheed Martin, who's a defense contractor, that I'm definitely not in any way representing a Lockheed Martin perspective. This is just a fun debate issue that we're having today. So I don't often have to do disclaimers, but I felt, given the topic, that was important today.

Daniel (04:14):
Yeah. Well, and I was gonna say, autonomy means a lot of things.

Chris (04:19):
It does.

Daniel (04:19):
And it can be applied in a lot of industries. And so I think some of the arguments that we're gonna talk about here are equally applicable to autonomy in, you know, self-driving cars, you know, airplanes, surveillance systems, manufacturing, whatever it is, all of these sorts of things where there are similar concerns to those that

(04:43):
we're talking about. Obviously, if you have autonomous weapon systems, there's a particular kind of life-and-death element to it, and it intersects with conflict around the world. In the YouTube videos I've watched of interesting debates, I've learned over time that you should frame this sort of thing as a question, or frame it as a...

(05:07):
Not a question, I should say, a statement, one side taking the affirmative, one side taking the negative side of that. And so the statement, if you will, is: autonomy within weapons systems or military applications is an overall positive thing for the

(05:33):
development of the world and safety and resolving conflict, all of those things.
So, because Chris is maybe a little bit closer to this, I'm going to take the affirmative side of that, which is, hey, autonomy could actually provide some benefit and more safety and

(05:54):
less loss of human life, more ethical sort of application. And Chris is gonna take the opposite side of that, arguing that we shouldn't have autonomy in these sorts of systems. Does that make sense, Chris? Did I explain that in a reasonable way? I'm not much of a debater.

(06:15):
So...

Chris (06:16):
No. That sounds fine. We will go with that. I wanted to mention to the audience, it was actually someone who kinda got this into my head a little bit through a post. It was one of our former guests on the show, back in the beginning of 2024, on February 20. We had an episode called "Leading the

(06:37):
Charge on AI and National Security," and our guest on that show was retired US Air Force General Jack Shanahan, who was the head of Project Maven and the DOD's Joint AI Center. He actually founded it. And I follow him a lot on LinkedIn, and he had highlighted two papers. One was kind of a pro autonomous weapon systems paper, written by

(07:02):
Professor Kevin Jon Heller, who is professor of international law and security at the University of Copenhagen Centre for Military Studies. And he's also a special adviser on war crimes at the International Criminal Court. And he wrote a paper called "The Concept of the Human in the Critique of Autonomous Weapons."

(07:22):
And subsequent to that, after he wrote that, two other individuals, Elke Schwarz, who is professor of political theory at Queen Mary University of London, and Dr. Neil Renic, who is also an expert in this area (he's a researcher at the Centre for Military Studies in the Department of Political Science at the University of Copenhagen), wrote a counter paper

(07:46):
on this.
And that counter paper is called "On the Pitfalls of Technophilic Reason: A Commentary on Kevin Jon Heller's 'The Concept of the Human in the Critique of Autonomous Weapons.'" And that was a very recent paper, May 23 of this year, 2025, and the original paper written by Heller was December 15, 2023. And it just seemed

(08:09):
like an interesting, topical thing to jump into. And like I said, we're not gonna hold ourselves bound by the strictness of their topics, but mainly just have fun with it.

Daniel (08:19):
Yeah. Yeah. Have fun with it. And we should say, at least our plan for these types of conversations between Chris and I is that one of us would express core arguments of one of the sides of these debates, but then maybe open it up and just have some casual conversation about each of those points, kind of how we think about it, the validity of it,

(08:41):
that sort of thing. Maybe that's just because I selfishly don't wanna come off as a bad debater and not have a good rebuttal for my opponent.
But also, Chris and I are friends, so that

Chris (08:55):
makes it... And it's all for fun in the end.

Daniel (08:58):
Cool. So again, the kind of hot take, or the debate of today, is that autonomy is an overall positive in terms of application to autonomous weapons systems or in military applications. But I think, as you'll see, some of these arguments might be able to be extracted to other autonomous systems like

(09:20):
self-driving cars or planes, or maybe surveillance systems, or automation in manufacturing, etcetera. So I think we can start with the first of these, and I'm happy to throw out the first kind of claim here. And really I'm basing this claim on, so again, I'm kind of on this affirmative side and taking part

(09:44):
of that article by Heller and really highlighting it.
So my claim is that autonomy within conflict and weapon systems is, or could be, positive because real human soldiers, or humans that are part of any process, are biased, emotional,

(10:05):
and error-prone, right? And if an autonomous system is able to outperform humans in adhering, for example, to international humanitarian law, then that actually minimizes the harm of these systems, right? As opposed to being solely reliant on humans

(10:26):
who are biased, again, emotional and error-prone. So part of the article that motivated this talks about how decision making, particularly in high-impact scenarios like a combat scenario, is distorted by these sorts of cognitive and social biases,

(10:47):
negative emotions, and actual limitations of humans in terms of how much information they can process at any given time. And one of the quotes that I saw from there is: very few human soldiers in a firefight actually contemplate the implications of taking a life. Or, extrapolating that to other things, maybe

(11:11):
humans flying planes or humans driving cars or humans doing manufacturing activities.
They aren't contemplating the implications of every action that they're taking. In some ways, they're just trying to survive in those environments, or get through the day, or deal with their emotions. So that's kind of the first claim. What's your

(11:31):
thought, Chris?

Chris (11:32):
Well, I think the opposing viewpoint on that is gonna be really that we humans value the ethical and moral judgments that we make. We put a lot of stock in that, and that ability of ours as humans is really important. So, you know, the argument against the fact that, you know, kind of the

(11:53):
fog of war takes over for the individual and they're not thinking about higher thought, is that on the other side, we do obviously want humans in combat to have the ability to make moral judgments, and that does happen. You know, where you have someone thinking, you know, that's a child out there that has the weapon, and I'm

(12:17):
just not comfortable, no matter what happens, I'm not comfortable doing that.
And that's the kind of moral judgment that we as humans really value. And we don't necessarily, you know, trust autonomy to be able to make such distinctions in the near future. And so, you know, the notion of taking a

(12:37):
life-and-death decision out of a human's hands is something that we really struggle with. And it creates ideas like the accountability gaps that go along with that, and ensuring that, as horrendous as war is, there's some sort of ethical core to it that is at least available to the common

(12:58):
soldier that's making decisions in the heat of the moment.

Daniel (13:01):
Yeah. This is a really interesting one, Chris, because, I mean, I know you and I have talked about this, and you're also a pilot. Thinking about airplanes, I've been watching a lot of, with my wife, this series of shows; I think it's been produced for a very long time, but it's on streaming now. I think it's

(13:21):
called Mayday: Air Disasters. And each episode highlights a different air disaster, a commercial airline disaster that has happened historically, and kind of what happened. And they go through the investigation, and some of the clips are kind of funny.
Not in terms of what happened, because, obviously, they're

(13:42):
tragic, but in terms of how it's produced. It's very, very literal, and some of the acting is maybe not top notch. But what I've learned through that show is I have been surprised at the amount of information that pilots, let's say, have to process, and how a certain state of fatigue, or even just

(14:04):
unclear leadership, like who's in charge in a cockpit, can lead to just very irrational decisions. And so I do understand the argument, whether it's in terms of weapon systems or flying airplanes or driving cars; certainly people aren't always making those rational decisions in terms of how they

(14:26):
process information.

Chris (14:27):
I totally agree. But it's kinda funny as we talk about that and talk about, you know, the flaws that humans have in these processes, and we certainly do have them, there's also kind of the notion of, when you're processing that much information, knowing you

(14:51):
have a lot of information you have to put in context in the emergency of the moment. And pilots do that quite a lot, you know, as things happen. There's that uniqueness of the human brain being applied to that, that we don't yet, you know, feel that autonomy can take all the way.
And certainly, if you poll the flying public right now, you

(15:15):
know, in terms of airliners and stuff, and other places where they would put their own lives into the hands of autonomy, most people still, and I don't have a particular poll in mind, but having seen a bunch of them over the last couple of years, would argue that they are not comfortable not having a human in that. And once again, that is the notion of having someone that cares for them, that is in control, that

(15:39):
you have trust with, you know, a tremendous amount of expertise, being able to ensure that a good outcome occurs. And I think that's an important thing to accept and recognize, that people are not there yet. They're not to that point in general. Now, I will note, as someone who is in the

(16:01):
military-industrial, you know, complex world, that there's a lot more autonomy on the military side than the civilian side.
But I think we need to get to a point where our autonomy can match the expectations that we already have of our human operators.

Sponsors (16:35):
Okay, friends. Build the future of multi-agent software with Agency, AGNTCY. The Agency is an open source collective building the Internet of Agents. It is a collaboration layer where AI agents can discover, connect, and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for

(16:57):
inter-agent communication, and modular components to compose and scale multi-agent workflows.
Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco, and dozens more. The Agency is dropping code, specs, and services. No strings attached. You can now build with other

(17:21):
engineers who care about high-quality multi-agent software.
Visit agntcy.org and add your support.
That's agntcy.org.

Daniel (17:36):
Okay, Chris. Continuing on in relation to the arguments for autonomy, meaning AI or computer systems totally in control of things like weapon systems, or other things in a military context, or other autonomy. So the next argument

(17:58):
that I'll put forth in terms of pro autonomy, or the affirmative here, is... we've been talking a lot about kind of human morality or intentions infused in that. The claim here would be that responsibility essentially remains; like, there's no unclear responsibility here. So some

(18:20):
people might say, oh, if you have an airline crash and it was the computer's fault, right, then where do you put the blame? And here I would say, in expressing this part of the argument, the argument would be, well, responsibility still is

(18:41):
clear; it just remains with the designers or the commanders of these systems, just as with other kinds of automated systems. And if you think of automated weapon systems, they don't disrupt these kinds of legal and moral frameworks, because it's the designers of these systems that actually impose those

(19:02):
frameworks, or should impose them, within the systems. In this case, the argument would be there's no significant difference between kind of human soldiers or autonomous weapons in terms of criminal responsibility, because responsibility lies with those who design, program, or

(19:22):
authorize deployment of these machines, not with the machine itself. These are just instruments of the will of their developers and those responsible for employing them. So that's the argument.
What are your thoughts?

Chris (19:41):
And I think the counterargument on that is gonna be that, you know, reducing civilian casualties alone through automation is not enough, and, recognizing that automation can follow rule sets like the laws of war and other applicable laws, that we don't want to lose the notion of

(20:01):
moral decision making and the emotions that go with that. We have empathy. We have the recognition of moments where, under a particular rule or law, an action might be authorized, but there might be a kind of human notion of a

(20:21):
moment of restraint, because you recognize the distinction in a... you know, that is complex. The world is not simple. It's not black or white.
And these kinds of qualities of recognizing the situation, that there are lots of ancillary concerns that aren't necessarily covered under strict rules of engagement, those matter,

(20:42):
and we care about those. And if you put in a model that's trained on reacting to specific, you know, tactical situations and is following the rules, and so it's entirely legal, it still isn't able to

(21:02):
recognize those complexities, and it isn't able to have that empathy, and you can't replicate that with present technology or near-future technology to take the human out of the equation. So it kinda depends on how you're seeing that, in terms of, do you want to

(21:22):
take those human qualities entirely out of the equation and rely on autonomy? And I think that's one of the great questions of our time.

Daniel (21:31):
Yeah. I have a follow-up question here, Chris.
Sure. Maybe it's a proposition, I don't know.
But you mentioned how many rules or things that a human intervening in a situation... like, they make kind of decisions in gray areas, things are maybe fuzzy. I'm wondering if, like,

(21:54):
here we're framing this very binary, right? Automation or not automation. One of the things I found just practically, outside of this realm, in putting in automation with customers of ours that are in manufacturing or healthcare or wherever, is there could be an automation, and you can analyze

(22:19):
the inputs to that automation to understand if things are similar to things you've seen in the past, or different from those you've seen in the past. So for example, maybe you're processing clinical documents for patients, right? And there's a set of scenarios that you've seen quite often in the past and you've

(22:43):
tested for, and you have a certain level of confidence in how your AI system will process these clinical documents, maybe make some decisions and output a notification or an alert or a draft of an email or whatever those automations are.
Then you could receive a set of things on the input that is very

(23:05):
dissimilar to what you've seen in the past. Oh, this is a unique situation for this patient in healthcare or something. You kind of don't know what the AI system is gonna do, right? And often we would actually counsel our customers and our partners: well, that's an area where maybe you wanna

(23:27):
flag that and just not have the AI system automate it; those are the things maybe that should go to the human.
So I just wanted to call out that kind of scenario, where we're talking in a very binary way, like, is it automated or is it not? There's kind of this area where it's kind of automated and kind of not. And you're looking at the situational awareness of

(23:53):
what's coming into the system in light of the distribution of data that it's seen in the past.

Chris (24:00):
You're posing a really interesting question. And I guess I would frame that as: you're looking at these different types of outcomes, and based on the data, you're doing that within these structures; there's this kind of notion about where the human fits in relationship to that. I mean, you

(24:23):
kind of raised that up, about, you know, the autonomy raising those issues up to the human and stuff. And I think that's a big part of figuring out, in the path forward, what that relationship looks like. Because I think it varies quite widely across use cases and industries, what you're looking at. I know in my industry, there's the notion of kind of a human in the loop

(24:46):
versus a human on the loop. And so there is autonomy where humans and the autonomous systems are working as partners in a very direct way. And some people might say the human perspective on that is that the autonomy kinda makes the human who's in the loop kind of superhuman, by giving them a lot

(25:09):
more ability to process the right information at the right time to make the right decisions. And then, contrasting that against the notion of a human on the loop, which is kind of what you mentioned, where it raises something up that might be an exception or a special case or a specific thing in a larger quantity of data that you're trying to bring to

(25:29):
attention as a key finding.
And they suit different use cases, but it's not always very clear what those are when you're trying to design a system that does that. So I think there's a lot of room for both. I think that right now we tend to look at human in the loop more often. But I will say, as someone just making an

(25:51):
observation about the industry that I'm in, that in this particular case, the nature of warfare is changing rapidly and the speed of war is increasing rapidly. And we're challenged with what a human can do if things are speeding up all the time.

(26:12):
So, yeah, you've raised some really great questions there in terms of how to solve for some of these, especially considering what we've talked about, you know, the ethics versus deferring to the autonomy.
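To make Chris's in-the-loop versus on-the-loop distinction concrete for people building more ordinary automations, here is a small illustrative sketch. The Action type, the approve and veto hooks, and the execute function are hypothetical, not any real system's API; real deployments would add timeouts, audit logs, and far stricter controls.

    # Illustrative contrast between the two supervision patterns discussed above.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Action:
        description: str

    def execute(action: Action) -> None:
        print(f"executing: {action.description}")

    def human_in_the_loop(proposed: Action, approve: Callable[[Action], bool]) -> bool:
        """The system only proposes; nothing runs until a human explicitly approves."""
        if approve(proposed):
            execute(proposed)
            return True
        return False

    def human_on_the_loop(proposed: Action, veto: Callable[[Action], bool]) -> bool:
        """The system acts on its own; the human supervises and can only veto in time."""
        if veto(proposed):      # supervisor caught a problem before it ran
            return False
        execute(proposed)       # otherwise the autonomous action proceeds
        return True

    # Example: the on-the-loop action runs unless a human intervenes.
    human_on_the_loop(Action("send draft reply"), veto=lambda a: False)

The trade-off Chris raises about the speed of war shows up directly here: the in-the-loop version is only as fast as its slowest approval, while the on-the-loop version is only as safe as the supervisor's attention.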

Daniel (26:26):
Yeah. I'm glad that you brought up, like, speed and efficiency here. That's one thing I was considering while you were talking. And maybe I'll give just a couple of examples. It could be that in, let's say, an aircraft situation, I'll go back to that.
Maybe it's just because I've been watching that show with

(26:47):
my wife. But in that situation, maybe a situation comes up and you realize the automated system should alert the human operators of something. Well, if the human operators haven't been tuned in to what's going on with the

(27:07):
system, right, in this case the aircraft, that alert could happen and they could have to gain so much context and so much information to understand why the alert is happening, what systems are malfunctioning, etcetera, that by the time they catch up, right, then the plane's crashed or something

(27:29):
like that, right? And so in that sense, the speed and efficiency piece is really relevant.
I was also thinking, in a maybe less life-or-death type of situation, if you're using these vibe coding tools and you're creating X web app or something like that, you're creating this

(27:50):
new system, this new project that you're working on, you could have those automated systems working for lots of time, creating lots of things, and then, oh, all of a sudden, there's this error that the vibe coding tools can't resolve. Well, then you step into that and you're like, I don't know

(28:12):
anything of what's going on here. What files have been created? How is this configured? What is really the context of this error?
And it could be more inefficient for you to actually debug and deduce that; the overall efficiency kind

(28:36):
of would be less than if you had just written all that code. Now, that may not always be the case, and different scenarios happen, but that element of speed and efficiency really strikes me as a key piece of what we're talking about.

Chris (28:54):
Yeah. It's funny. Your example really resonates with me, because as I've tried vibe coding myself, compared to more, you know, conventional programming, I think I've run into that. And it kind of depends.
There's a lot of context if you're not only running into a problem, but you have to decide, to some degree... there

(29:15):
may be extraneous reasons to pick certain tools and structure certain things. And if you're vibe coding, then essentially the model may have a lot of freedom to choose how it's structuring, you know, what you're building and the tools that it's using and things like that, and that may have an impact. And I think that's where I run into vibe coding challenges myself: giving the model the autonomy to

(29:39):
make choices that I really think I should be making myself, whether I'm right or not, choices that may have to do with performance or constraints or future team skills or whatever else we may have. So a great point you're making there.

Daniel (29:54):
Okay, Chris. The third affirmative claim that I'll express from the side of, you know, pro autonomy in things like weapon systems and conflict, and maybe other areas of autonomy, is that autonomy could actually reduce

(30:17):
harm, casualties, and suffering, because autonomy actually results in greater precision or consistency. So banning these sorts of systems on principle may, you know, cause more harm than good, essentially. And, you know, you could kind of play this out. A couple of things that are said in the

(30:39):
paper by Heller, again, that was kind of inspiring this conversation, are, you know, it's only a matter of time before these systems maybe comply with international humanitarian law better than human soldiers. Or, you know, these systems, and I think this would be true

(31:01):
maybe at least in certain cases now, have, or have the potential to be, more precise than even the most precise non-autonomous systems in terms of targeting or performance or operation in some sort of way.

(31:23):
So the argument here is that a kind of outright ban would actually result in more suffering rather than less suffering. So how does that strike you?

Chris (31:35):
Well, before I make a counterargument against it, it's interesting, something that you noted there. I'll just throw out a two-second example. You know, the Russian invasion of Ukraine has been going on now for several years. And, as reported in the news, there's quite a bit of small autonomous

(31:55):
drone warfare going on. And one of the things that we've seen there, a combination of both human and autonomous (I think this will kinda lead into my side a little), is that you may have drone operators flying, but often, when they make the strike, the final bit of the strike has been made autonomous, where the actual hit itself in the last couple of seconds will be

(32:19):
autonomy-driven, to your point about, you know, precision and such.
That's a little bit... you definitely have a human in the loop through most of that operation, and then that human has to decide to go fully autonomous for that final precision piece. But, you know, on the counter side of that, it's important to note that as you are bringing in more autonomous

(32:41):
capabilities, especially when you're moving the human out of a direct loop to be just kind of human on the loop, where they're kind of overseeing the autonomy versus directly, you know, directing the weapon itself, there's a dehumanizing aspect to that. There is this notion about: I have a tool, and as a human, I don't even need to drive my tool to the

(33:02):
bitter end and strike my objective.
I can just let the machine take it home. And I think there is something that is potentially revolting to a lot of people about the notion of kind of devaluing a human life; that even though that may be your adversary, and your

(33:23):
mission is to go eliminate that adversary, it's still a human being, and you're still just outsourcing to your machine to go take care of your current problem, which is your target. And that is a big struggle in this ethical conversation that we're having: even if it is your enemy and your adversary that you're trying

(33:45):
to address in the manner that you have to do it, what does it mean to outsource it? Does that make it, you know, even more dehumanizing and inhumane to do it that way? So I think both sides have really interesting arguments.
And I hope, for listeners who've been listening so far, you're

(34:05):
seeing kind of that there's not a right or wrong answer to either side on these issues. They both have pros and cons so far.

Daniel (34:14):
Yeah. Well, there's probably some people out there that would take a binary point of view...

Chris (34:21):
I'm sure there are.

Daniel (34:24):
Generally, I actually... you know, I don't know if people saw the movie Conclave, but the guy, in his sort of opening homily in that, talks about certainty as a kind of evil and potentially harmful thing.

(34:45):
I think there's definitely at least some nuance to this from my perspective that needs to be discussed. And it's good to see people engaging on both sides of this, whatever level of certainty you might have. But yeah, one of the things that came to my mind as we were

(35:06):
talking about this, Chris, is there's one side of it that I think is very relevant, which is this dehumanizing element, like you're saying. What are the actual implications? Do I have to stare at my enemy in the face?
Or I don't have to sort of see this happen; it sort of happens one

(35:27):
step removed. Which I think is very valid, actually. I think that is quite concerning from my standpoint. Also, I think the point that I was thinking of while you were talking was from a kind of command-and-control side, or a leadership side. Like,

(35:48):
those that are actually making some of those decisions... I think you could imagine, and probably find, a lot of scenarios where leadership has made a decision that they expect to be carried out by their troops, or maybe, in the enterprise or industry

(36:09):
scenario, their employees, right? And either by way of miscommunication or by way of outright refusal or malicious intent, that plan is not carried out, with the result that

(36:31):
something actually suffers, whether that be human suffering or commercial suffering in terms of loss and that sort of thing. So I think that there could be this element of: if you are in control of autonomous systems, and those autonomous systems actually carry out your intent, and that leadership that

(36:55):
you're putting in, or the command and control, is well intentioned towards the greater good, then you can be a little bit more sure that those things are carried out. So yeah, I get the sense that this is part of the argument too in the paper.

Chris (37:10):
Yeah, I agree. And command and control is hugely relevant and important to these systems, and not just in the military context, but in aviation and in all of the different places; it can be a factory, that kind of thing. That's one of the crux issues that kind of puts

(37:31):
all of these into the context in which they're being applied. And another thing to throw in there as well is, you know, some of the legal frameworks that we have, both in the country that we're in, which is the United States of America, and elsewhere throughout the international community. There's a lot of constraining that we're trying to do. For instance, within the United States, you know, we've talked a little bit

(37:55):
about subscribing to the international laws of war, which all modern countries that are engaged in these activities should be doing, because, you know, ethics is kind of built into a lot of that stuff to keep us on the right track when we are engaged in conflict.
But in addition to that, in the US, the Department of

(38:18):
Defense has Directive 3000.09, which is called Autonomy in Weapon Systems. And it's far too detailed; we're not gonna go into it on the show here and everything. But if this is a topic of interest to listeners out there, you can Google that and read the directive and kinda see where things are in the present state. And there's a lot of discussion, as

(38:41):
we talk about autonomy in these contexts, about whether we need to update some of those to represent our current choices and our ethical frameworks, about how we're seeing autonomy in our lives today versus at the point in time that it was written, and that's changing rapidly right now.
So DOD Directive 3000.09 is another place to add some

(39:02):
context to this conversation that we've been having on autonomy in weapon systems. And then something else to note is, as we get into other industries, things like law enforcement, things where there can be conflict, you know, brought into this by definition, where you're having a

(39:23):
police action or something, what is appropriate there when you're talking about more of a domestic situation? And where do you want to go? There's so much that has yet to be figured out.
And I really hope that listeners, whether or not this is anywhere close to the industries they're in, it still affects their lives. If you look at the disturbances and

(39:43):
government actions that are balancing each other, regardless of what your political inclinations are, that we're seeing out there, you know, think about what's happening and what the responses are and what's appropriate, and where do these new technologies fit into a world that is rapidly, rapidly evolving? I hope people are really thinking deeply about

(40:06):
this and joining in on the decision making. It's a good moment in history to be an activist.

Daniel (40:13):
Yeah. Yeah. And if I'm just sort of summarizing, maybe, for... you know, there's probably only a small portion of our audience, or I don't know, maybe it's small, that is actually involved in kind of autonomous weapons sorts of things. But I think for the wider group that are building AI-driven products and

(40:37):
AI integrations into other industries, there are some really interesting key takeaways from this back-and-forth discussion. And I'm just, in my mind, summarizing some of those, Chris.
One, from that initial point, is that when you're creating AI integrations into things and measuring performance, to some

(41:00):
degree you want to think about, well, what is the actual human-level performance on this? And that is usually, if not always, not 100% accuracy or 100% performance, right? Humans make mistakes. And so when you're thinking about those implementations, whether

(41:20):
it be in machine translation or other AI automations or knowledge retrieval, ask how a human would actually perform in that scenario, and maybe do some of those comparisons. Absolutely.
Yeah, I think a second point would be, there is some responsibility, if you're creating automations, that lies

(41:40):
with the designers and the builders of these systems. And so you should take that responsibility and take ownership of that. And then I think finally, from that last piece of conversation, I love how you said it, Chris, that we should not dehumanize those that we're serving. That doesn't mean

(42:02):
we can't use autonomy in any number of scenarios, but we should value human life and value those that are going to be using our systems, and actually not try to distance ourselves, but have that empathy, which has the added side benefit that

(42:23):
you're going to create better products for people, you're gonna create better integrations that they want to use, and you're gonna enhance their agency, hopefully, with AI. I don't know, those are a few interesting summary points, maybe, from our discussion.
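Picking up Daniel's first takeaway, one lightweight way to ground "human-level performance" is to score the model and a human reviewer against the same labeled evaluation set and compare the two numbers, rather than comparing the model against perfection. The sketch below is purely illustrative; the labels and the bare accuracy metric are placeholders for whatever evaluation actually fits the task.

    # Toy comparison of an AI automation against a human baseline on one shared,
    # labeled evaluation set. Labels and plain accuracy are illustrative; real
    # evaluations would use task-appropriate metrics and far more examples.

    def accuracy(predictions: list[str], gold: list[str]) -> float:
        return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

    gold_labels  = ["approve", "flag", "approve", "flag", "approve"]
    model_output = ["approve", "flag", "flag",    "flag", "approve"]
    human_output = ["approve", "approve", "approve", "flag", "approve"]

    print(f"model accuracy: {accuracy(model_output, gold_labels):.0%}")  # 80%
    print(f"human accuracy: {accuracy(human_output, gold_labels):.0%}")  # 80%
    # The bar is human-level performance, which is rarely 100%, not flawless output.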

Chris (42:39):
Absolutely. And there was one thing I was wanting to note: this has been a really interesting conversation from my standpoint, in that at the beginning of it, you assigned me the role of kind of the caution toward autonomy, as opposed to the full-in, whereas, being in the industry I'm in, I am actually fairly pro-autonomy in a lot of

(43:02):
ways. And that's partially because, working in the industry, I've developed a sense of confidence not only in the technology, but in the people doing it, because they're all very human themselves. But I found it was really instructive for me to play the part of the caution side, just to remind myself about all these points that I

(43:25):
really care about as a human. And so I think that actually worked out much better than if I had been the one that, you know, took the kind of all-in, pro-autonomy position on that.
So I wanted to thank you. It's been a very, very good discussion on this back and forth. And it's really

(43:45):
making me think, especially as you would say something that I normally would find myself saying, and then I'm thinking, okay, well, it's time for me to think about that other point there. So I appreciate that. Yeah. Good, thoughtful conversation.

Daniel (43:57):
Yeah, this was great. Hopefully it is a good learning point for people out there. I'll remind people as well that one of the things we're doing now is listing out some good webinars for you all, our listeners, to join the conversation on certain topics. So if you go to practicalai.fm/webinars, those

(44:20):
are out there. But also, we would love for you to engage with us on social media, on LinkedIn, Bluesky, X, and share with us some of your thoughts on this topic.
We would love to hear your side of any of these arguments, or your perspective on these things. So thanks for listening to this original, our first try at a Hot

(44:44):
Takes and Debates. Don't...

Chris (44:45):
forgive us.

Daniel (44:46):
Yes. Exactly.

Chris (44:47):
Our experimentation here.

Daniel (44:48):
Let's do it again, Chris. This was fun. Sounds good.

Jerod (44:58):
Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.
Check them out at predictionguard.com. Also,

(45:21):
thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.