
February 28, 2024 20 mins

On today’s episode, the US military’s mysterious project to bring modern artificial intelligence to the battlefield — told by the defense official behind it, whose job was so secretive he couldn’t even tell his wife about it. Bloomberg’s Katrina Manson takes host Saleha Mohsin behind the scenes for an unclassified look at Project Maven.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Bloomberg Audio Studios, podcasts, radio news.

Speaker 2 (00:08):
For a long time, I had a job that wasn't acknowledged,
and my wife didn't know what I did for a
living other than there was a paycheck coming in.

Speaker 3 (00:16):
This is Will Roper. Back in the mid twenty tens,
he was in charge of a then secret office within
the Defense Department called the Strategic Capabilities Office.

Speaker 2 (00:25):
That meant developing new systems and new strategies, building
the future of warfare.

Speaker 3 (00:31):
Roper was fascinated by how big tech was using artificial
intelligence to scan photos and identify objects and what it
could do for the US military.

Speaker 2 (00:40):
A lot of what industry was trying to do, being
able to identify buildings and roads and objects, it seemed
very traceable to battlefield analogs.

Speaker 3 (00:51):
Roper would go on to propose what would become a
powerful piece of AI technology for the military, Project Maven.
But the journey to build Maven and bring it to
the battlefield hasn't been easy. There are still big questions
about whether the US military should even be using algorithms
in warfare.

Speaker 1 (01:08):
They use this language that there'll always be a human in
the loop. But at the same time, the whole point
of using AI is to speed up, and people talk
about decisions speeding up so fast that they're faster than
a human can catch up with.

Speaker 3 (01:24):
That was Bloomberg reporter Katrina Manson.

Speaker 1 (01:26):
Today.

Speaker 3 (01:26):
On the show, she takes us inside the promise and
the peril of AI in the US military. We go
inside Project Maven, the Pentagon's flagship artificial intelligence effort. You'll
also hear from the US intelligence official who oversees much
of Maven, and the CTO of Centcom, the US Central
Command whose troops are actually using it in battle. From

(01:50):
Bloomberg's Washington Bureau, This is the Big Take DC podcast.
I'm Saleha Mohsin. My colleague Katrina Manson covers cyber and
emerging tech with a focus on national security. In the
past few years, she's been hearing a lot about how
AI could be the biggest thing to transform warfare since

(02:12):
the invention of the radio or the machine gun. But
she says the US military's big effort to bring AI
to the battlefield really kicked off in twenty seventeen with
Project Maven.

Speaker 1 (02:22):
The Defense Department now likes to say they've been using
AI for sixty years, but it was really that announcement
that said, we want to look at AI machine learning
algorithms and we want to find battlefield applications for it.
As Will Roper, one of the people who came up
with the idea, said he always saw it as a
way to find a better way of doing automatic target recognition.

Speaker 3 (02:46):
Automatic target recognition, in this case, scanning satellite imagery and
other data inputs to identify potential threats and enemy locations
on a battlefield and making sure the US doesn't misidentify
non threats as targets and end up deploying a weapon
against a hospital instead of a weapons factory. Until this point,

(03:07):
the military relied on people to comb through hours of
video footage and satellite imagery to identify these targets. But
Roper thought a computer could do this faster, maybe even better.

Speaker 2 (03:18):
In a real battle, there's so much to see and
observe to try to get down to the handful of
targets that you need to prosecute with weapons, And it
was clear that processing had gotten to be so good
and there was so much data available that you could
build these very deep neural nets.

Speaker 3 (03:36):
But in twenty seventeen, the US military wasn't familiar with
deep neural nets, and it definitely couldn't build them.

Speaker 2 (03:43):
The Internet of Things did not happen to the US
military when it happened to everyone else. The US military
has amazing hardware, but the connectedness between it is not.
So military members leave their lives, where they're connected to
almost everything, and enter military lives where they're connected to almost nothing.
What most people, I think expect is that aircraft and

(04:06):
ships connect like our smart devices do, and they don't,
not even close.

Speaker 3 (04:11):
So Roper proposed the military make a major push focused
on automatic target recognition. But when he got into a
room with senior defense officials to pitch his idea for
what would become Project Maven, he didn't exactly get the
response he wanted.

Speaker 2 (04:25):
Well, skeptics are pretty much what filled the halls of
the Pentagon. It's not a place of great creativity and innovation.
And I remember still having to put together a tutorial
presentation about why this would feasibly work, why this was
a big trend in commercial industry, and why we needed

(04:47):
to be on the right side of the trend.

Speaker 3 (04:49):
Like any government bureaucracy, the Pentagon has a lot of
red tape. Its approach is nothing like the move fast and
break things mentality of Silicon Valley. That's in part for
good reason. The stakes are incredibly high when it comes
to anything the US government decides to back, but especially
when it comes to warfare. But some of those Pentagon
officials did take a chance on Roper's idea, and in

(05:12):
April twenty seventeen, the Defense Department announced Project Maven, a
way to use AI and data for actionable intelligence. What
that meant was developing machine learning algorithms and then constantly
retraining those algorithms with up to date, well labeled data.
But all that outdated military tech that Roper was dealing
with made that hard. My colleague Katrina spoke to one

(05:36):
of the experts who worked on testing and evaluation in
the early days of Maven.

Speaker 1 (05:40):
And she said it was really not very good at all.
They didn't have the data labeled. There was no sense
of everything that we'd been doing when we click online
and say, yes, this is a traffic light, yes, this
is a cat. There wasn't that same store of images
for tanks or weapons factories.

Speaker 3 (05:57):
So Roper and his team had people go through images
and manually label them.

Speaker 2 (06:01):
There were thousands of people participating and labeling data so
that we could train algorithms.

Speaker 3 (06:07):
They also needed some expertise to develop the program. The
Pentagon partnered with some commercial tech companies, ones that were
leaps and bounds ahead of the military when it came
to AI, but in twenty eighteen that became a problem.

Speaker 1 (06:20):
Google was involved in the early days of Project Maven,
and in twenty eighteen thousands of Google workers protested. The
key question was, is it just applying that ability to recognize
a cat to a tank? How much moral culpability did
those workers then feel? Because what might happen next is
the tank might be exploded, and if it made a mistake,
a human might be killed, or the targets may eventually

(06:44):
involve places where humans are or civilians are. And in
a letter to their seniors, they said, we do not want
to work on warfare technology.

Speaker 3 (06:52):
It brought Project Maven into the spotlight, splashed across
the news. Quote.

Speaker 4 (06:57):
We believe Google should not be in the business of war. Therefore,
we asked that Project Maven be canceled, and that Google enforce
a clear policy stating that neither Google nor its contractors
will ever build warfare technology.

Speaker 3 (07:10):
Did the protests give people inside the military any pause?

Speaker 1 (07:14):
I think they did. I think the military also realized
it really had to grapple with, well, what are we
actually asking big tech to do? These are not people
who signed up to serve the flag, and the
missions are completely different, the cultures are different.
At the time, you had senior defense officials
flying over to the West Coast to try and understand

(07:34):
what this completely different culture was about. And since then
the people in charge of Project Maven had to pivot
and find other partners besides Google, ones that were already partners.
But I think the Pentagon went through a renewed kind
of outreach to Silicon Valley.

Speaker 3 (07:49):
And with the help of those other tech partners and
thousands of workers labeling the data, the algorithms behind Project
Maven got more and more accurate. An Army colonel
named Joseph Callahan, who runs weapons fires, started taking an
interest in MAVEN. He was asked to start experimenting with
it in his unit, the eighteenth Airborne Corps in North Carolina.

(08:12):
He ran live ammunition exercises using the computer algorithm to
identify a target like a tank, and then having a
human deploy a weapon to explode that target. Testing the
tech is a crucial part of the process. The whole
point of machine learning is feedback, trying out an algorithm
and then telling it when it's right or wrong so

(08:32):
it can learn and do better next time. But just
because the boss wants his workers to start using a
shiny new tool doesn't mean everyone will fall into line.
Katrina told me about one officer named Joey Temple.

Speaker 1 (08:45):
The very battle-hardened targeting officer served five tours in
Iraq and he was offered a demonstration of what's called
Maven Smart System.

Speaker 3 (08:54):
The platform that houses Maven and other data streams.

Speaker 1 (08:57):
He said, no, I don't want a demo. I don't
want a demo. And he told me that he had
a sense that stuff like this doesn't work. He didn't
need another tool. He was befuddled by hearing so much
about AI when he'd really never worked with AI before,
and he was an expert. He is an expert targeter.
He's done this his entire career.

Speaker 3 (09:15):
Initially, Temple wasn't interested, but the next month, in August
of twenty twenty one, something changed his mind. Coming up
how Project Maven won over battle hardened soldiers like Joey Temple,
and how it's being used on actual battlefields today. At first,

(09:41):
senior targeting officer Joey Temple wasn't interested in using Project Maven,
but that changed in August twenty twenty one.

Speaker 2 (09:49):
Chaotic scenes as US troops at Kabul Airport strove to
evacuate Americans as well as Afghan civilians.

Speaker 3 (09:55):
It was the evacuation of US troops from Afghanistan. The
military had to airlift one hundred and twenty
thousand people out of Kabul under very dangerous conditions, and
Temple was put in front of a Maven screen that
mapped out threats on the ground in real time.

Speaker 1 (10:11):
He could see people walking around the routes of planes.
For him, seeing Maven Smart System operate like that was
a bit of a light bulb moment.

Speaker 3 (10:19):
The folks in charge of Maven are relying on those
sorts of light bulb moments. The military needs officers willing
to use Maven to train its algorithms to get better,
and to link it up to data and sensors and
actually make it operational in the way the military runs
its battles. Otherwise it's pretty much useless.

Speaker 1 (10:37):
You do tend to get experiences like that of Joey
Temple who really are reticent at the start. And he
even told me he has people under him who say, no,
I don't want to use this. I mean, he calls
himself a skeptic. He is constantly going to play Devil's
advocate because what's so important for him is to know
is this system really reliable. That's what I found so interesting.

(10:59):
How you start integrating what military folks like to call
human machine teaming. What is that actually like from the
perspective of the person on the ground who's been told here,
there's a newfangled toy, use this. Those people have very
very deep relationships with their fellow combatants, so that the
people they go to war with they trust. That's a
really fundamental idea of US warfare, of any warfare. What

(11:23):
if you've got to trust an algorithm.

Speaker 3 (11:24):
The military is now asking soldiers of the eighteenth Airborne
Corps to trust that algorithm, at least in experiments on
their base. It's now the largest operational test bed for Maven.
Last year, Katrina flew to the base in North Carolina
to see it in action herself.

Speaker 1 (11:40):
I was finally given an unclassified demonstration of it, and
it was incredibly revealing to see it, because it wasn't
quite what I expected, and they did a mock up
for me using unclassified satellite imagery, but it represented a
real operation that the US had conducted several months before.
And it's very much like looking at a map on

(12:01):
your phone, only this had yellow boxes and blue boxes, and
the blue boxes meant don't hit this, this is not
a target. We've identified this as a school or a
hospital or friendly forces, and yellow boxes meant what they
call points of interest. So in this case, if you
were asking the algorithm to find ships, it would box
out all the ships, and then you might be able

(12:23):
to ask it to find a specific kind of ship,
maybe a warship, and they can layer that in using
lots and lots of different data feed sources.

Speaker 3 (12:31):
Not just the satellite photos and videos that Maven was
originally trained on. Now the program incorporates and analyzes so
much more data. Katrina got to hear about that from
Mark Munsell. He's the director of Data and Digital Innovation
at the National Geospatial Intelligence Agency, which since last year
has run most of Maven. The agency has been working

(12:52):
closely with troops to test, field and improve Maven on
the ground.

Speaker 5 (12:56):
We also use radio waves bouncing off of objects, and
we also use infrared to see heat light, and we
also use multi-spectral remote sensing, so not just
the visible light spectrum, but also the invisible light spectrum.

Speaker 3 (13:13):
Munsell told her the way he sees it, there are
two big advantages of using Maven in military targeting.

Speaker 5 (13:20):
So speed is very important, right Obviously, the faster that
we can take an image, the faster that we can
run an algorithm and get the results is very important.
But I will say the biggest advantage would be scale.
The fact that we can do it on many images,
you know, as fast as we can is really where
we're providing the advantage.

Speaker 3 (13:40):
An advantage that was just recently proven, not just in
a test, but on the actual battlefield.

Speaker 6 (13:46):
October seven, everything changed.

Speaker 3 (13:48):
That's Schuyler Moore, the Chief Technology Officer of US Central Command,
often referred to as Centcom. Katrina managed to get her
on the phone earlier this month.

Speaker 6 (13:57):
We immediately shifted into high gear and a much higher operational
tempo than we had previously. And the reason that I
think we were able to and had a pretty seamless
shift of using Maven was because we'd done about twelve
months worth of digital exercises leading up to that.

Speaker 3 (14:13):
Moore told Katrina that the US also used Maven
to identify and narrow down dozens of targets that US
air strikes then fired on in Iraq and Syria.

Speaker 1 (14:22):
That for me is the first time that I've really
heard someone talk about using AI to find something and
then deliver a weapon to it that actually belonged to
the enemy and cause damage. It destroyed some, there were casualties,
and so that cycle is really being operationalized.

Speaker 6 (14:38):
We've certainly had more opportunities to target in the last
sixty to ninety days. We have a lot of examples
of the Houthis presenting those types of threats.

Speaker 1 (14:47):
They're finding rocket launchers in Yemen, they're finding vessels in
the Red Sea, and Maven has become an intrinsic part
of that decision cycle to try and find out where
is there a threat, what do we want to do
about it, and how can we be sure.

Speaker 3 (15:04):
The tech is being used? But it hasn't been perfected yet.
While humans at the eighteenth Airborne Corps can correctly identify
a tank about eighty four percent of the time, Maven
gets it closer to sixty percent, and experts have told
Katrina that that number can dip even lower, as low
as thirty percent when there's snow or cloud cover in
an image, or if it's searching for a new type

(15:25):
of object.

Speaker 1 (15:26):
A lot of the US military imagery comes from the
Middle East. When you move to somewhere like Ukraine, there's snow,
there's cloud cover, there's rain, so the conditions change a
lot and the algorithms start performing far less well.

Speaker 3 (15:40):
These challenges are some of the things that critics point
to when they express concern about the US military using
this technology. Another big thing that comes up is backlash
against the idea of using autonomous machines which can use
AI on the battlefield at all. There's a coalition of
human rights and expert groups called Stop Killer Robots that's
dedicated to keeping lethal autonomous machines that can use AI

(16:04):
out of warfare. Even UN Secretary General Antonio Guterres has
called for a ban on autonomous weapons systems. Here's Katrina again.

Speaker 1 (16:13):
An algorithm itself can be trained using data that was
incorrect or incomplete. It can also be targeted by adversary attacks.
The data could be changed without anyone knowing. The algorithm could
be spoofed, it can be poisoned, it can lose accuracy
over time. That's a very natural thing for an algorithm
to do. It stops being as effective. So you have

(16:34):
to work out at what point do you want to
change, retrain, work with that algorithm. And then, of course,
when you've got life and death scenarios where you're
deciding how much trust should be put in this algorithm,
what should we do with it. It might be feasible
to use an algorithm that makes mistakes if it's just
going to blow up water, you know, it's trying to
get a ship, but instead it gets water, no big deal.
But if you're trying to get something on land and

(16:56):
if you miss, you're hitting civilians, and of course,
the risks are extremely high.

Speaker 3 (17:02):
The US military says that this is why it relies
on human machine teaming. They say that the algorithm itself
will never make the decision to execute an airstrike. It
just identifies a target for a human commander. But Maven
is making those decisions and those links to weapons faster.

Speaker 5 (17:19):
We want to shorten that kill chain, you know, as
fast as we can.

Speaker 3 (17:23):
Mark Munsell again. He's the agency official who oversees the
office that runs Maven.

Speaker 5 (17:28):
So instead of a human having to open up an image,
scan the image, find everything on the image, and then
report that out to a combatant commander, we want
to shorten that by using the algorithms.

Speaker 1 (17:38):
To do that, so the combatant commander gets the choice
of what to execute fires on.

Speaker 5 (17:43):
And that much faster.

Speaker 3 (17:44):
But as Katrina points out, speeding up the pace at
which targets can be identified and fired on, that can
be complicated.

Speaker 1 (17:50):
And of course there's been a huge amount of work
on confirmation bias in an algorithm, and I think you
have to wonder if an algorithm says this is something
that we should be looking at as a target, does
that make the person who's there to say yes or
no to the algorithm more likely to say yes or
less likely to say yes.

Speaker 3 (18:08):
The US military expects this kind of technology to improve,
but in the meantime they're continuing to use it, and
other countries around the world like Israel and Ukraine are
using similar technology. That raises the question, is the world
ready for what that might mean?

Speaker 2 (18:23):
We're not ready for that in the US military, for
a war of algorithms, a war of software.

Speaker 3 (18:28):
Keeping up is the reason Will Roper wanted to bring
AI to the US military in the first place.

Speaker 2 (18:33):
We've always had this human advantage in the US military
because we're always operating on the premise that our people are experienced
and trained and they're ready to go, and we've always been
able to say that, if we're feeling
a little nervous about a scenario, we've got the
people who are ready. Well, with AI, it may be

(18:55):
the first technology where you can undercut human advantage.

Speaker 3 (18:59):
That's a big concern to folks at the Pentagon and in
government who fear a war with, say, China, could put
the US on the back foot. But war doesn't wait
on software updates. Katrina learned that in twenty twenty two,
the US helped Ukraine by using Maven to share so
called points of interest where Russian equipment was located. Here's
this stat that really stuck with me. During that process,

(19:22):
the Maven Smart System platform underwent more than fifty rounds
of improvement in just the first ten months. Roper thinks
that the people within the military are going to need
to change the way they think about algorithms, because like
it or not, at this point, there's no going back.

Speaker 2 (19:37):
The military is going to have to think about
AI not as much like a piece of computer software,
but almost like a member of the military that you
train and that you trust with a certain amount of
authority based on its training and pedigree. Hopefully those lessons
will be learned before they're needed.

Speaker 3 (19:59):
Thanks for listening to The Big Take DC podcast from
Bloomberg News. I'm Saleha Mohsin. This episode was produced by
Julia Press and Naomi Shaven. It was fact checked by
Alex Sugia. Special thanks to Katrina Manson, Margaret Sutherland, Vicky Vergalina,
and Rebecca Shassen. Ben O'Brien is our mix engineer. Our
story editors are Caitlin Kenney, Wendy Benjaminson, and Michael Shephard.

(20:21):
Nicole Beemsterboer is our executive producer. Sage Bauman is our
head of podcasts. If you like what you heard, please subscribe, rate,
and review the show. It helps other listeners find us.
Thanks for tuning in. I'll be back next week.