
May 16, 2024 48 mins

For years, a Hawaiian-shirt and flip-flop wearing, smack-talking entrepreneur has been promising to disrupt the US defense industry. Now, Palmer Luckey, founder of autonomous weapons startup Anduril Industries, isn’t just talking about it, he’s doing it. Emily Chang and Palmer Luckey sit down to discuss building Anduril, his relationship with Silicon Valley, how autonomous weapons are changing the battlefield in Ukraine, and culture clashes between Big Tech and the defense industry. 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
I'm Emily Chang, and this is The Circuit. If you told me that one day I'd be driving a warship in the Pacific with Palmer Luckey, I probably wouldn't believe you.
Are you sure I'm not gonna flip the boat?

Speaker 2 (00:14):
Yeah, you're gonna be fine.

Speaker 1 (00:16):
For the uninitiated: Luckey is the flip-flop-wearing, Hawaiian-shirt-sporting creator of the Oculus VR headset. After selling his company to Meta and then getting ousted from Silicon Valley over issues we'll get into later, Luckey switched gears to the defense industry.

Speaker 2 (00:32):
Some of the United States' technology is very bad. Some of it is actually very, very good, but it's also extremely expensive and not necessarily adapted to the types of conflicts we're going to see in the future. The United States has a lot of investment in legacy weapons systems that are not necessarily having China quaking in their boots.

Speaker 1 (00:51):
Modernizing the battlefield is key here because the stakes have never been higher: Ukraine, tensions between China and Taiwan. Luckey is hoping his company Anduril's smart bets on modern weapons infused with AI can take a slice of the Pentagon's massive budget and play a critical role as the paradigm of war evolves. Joining me on this edition of The Circuit,

(01:13):
Anduril Industries founder, Palmer Luckey. What are you building here? What's the mission?
What's the mission?

Speaker 2 (01:19):
Ultimately, Anduril is trying to build radical new defense technologies that allow the United States to better deter violence against us, to stop conflicts that would drag us in, drag our partners in, drag our allies in. We're building tools that allow our partners and allies around the world to make themselves a lot more prickly, so that people don't want to step on them.

Speaker 1 (01:38):
Paint a picture for me: just how bad is the technology that the US government is using right now?

Speaker 2 (01:43):
Well, some of the United States' technology is very bad. Some of it is actually very, very good, but it's also extremely expensive and not necessarily adapted to the types of conflicts we're going to see in the future. The United States has a lot of investment in legacy weapons systems that are not necessarily having China quaking in their boots.

Speaker 1 (02:02):
So historically, military technology, weapons, vehicles: super expensive. Now that's changing.

Speaker 2 (02:09):
If you look at this in terms of the history of the United States, we've always had our most innovative companies working hand in hand with the DoD, and we've actually won a lot of wars by being better than the other guy at cheaply manufacturing large numbers of very good weapons systems. It's actually quite recent that it's gotten so out of whack. It's very recent that these weapons systems have become extraordinarily expensive for what they are, as

(02:32):
in, within living memory. And that's really one of the problems Anduril's trying to solve. We're trying to use not just the right product decisions, but also modern manufacturing techniques, and leveraging a lot of the technology that's come out of Silicon Valley over the last few decades, to get back to building cost-effective and combat-effective weapons that are going to protect the United States.

Speaker 1 (02:51):
So explain the state of play on the battlefield and where Anduril fits in.

Speaker 2 (02:54):
Well, Anduril plays across basically every domain. We have things that are underwater, on the surface of the water, in air, in space, on land. Our goal is not to go really deep on one particular area and be, you know, the AI torpedo company or the AI small quadcopter company. It's to become one of the major defense primes, one of the ones that is large enough

(03:16):
to have an impact not just within their own company,
but externally to change the way that the government does procurement.
I want to be in a position where I can
have that impact, make those changes to the way the
government develops and buys weapons.

Speaker 1 (03:28):
How'd you get interested in defense technology?

Speaker 2 (03:31):
I've been interested in it for a really long time, probably my whole life. I briefly was able to work as a lab technician on an Army project called BraveMind, which was treating veterans with PTSD using virtual reality exposure therapy. And that was actually before I started Oculus. But even as I ran my company Oculus, I kept in touch with a lot of friends in the defense industry, and

(03:51):
what I heard over and over again is that it was broken. The incentives were wrong. They were being punished for doing the right thing, rewarded for doing the wrong thing. They made more money through these cost-plus contracts when they took longer than they were supposed to. They made more money when they were over budget, and that really got me worried.

Speaker 1 (04:07):
You're trying to run a defense tech company like a startup. How does that compare to, like, Lockheed Martin and Boeing, and how do you get Washington to accept that?

Speaker 2 (04:17):
Anduril thinks of ourselves as a defense product company more than a defense contractor. What that means is that we build products using our own money, and then we sell them to the customer. This is not a radical business model in most industries, it's actually quite common, but in the defense space it's not the way that things are done. Most major weapons procurement is done on a cost-plus basis. Most new R&D is done

(04:38):
on a cost-plus basis, meaning the contractor gets paid for their time, their materials, and then a fixed percentage of profit on top. Of course, that incentivizes you to come up with expensive solutions, to build them using expensive parts, and to drag it out as long as you can. Not necessarily consciously, people aren't waking up and saying, ah, another day, another chance to screw the US taxpayer. But the system naturally rewards people who run programs in

(05:02):
these slower ways and these more expensive ways. Anduril, we're the opposite. Because we're a defense products company that makes things that work and sells them, rather than getting paid to do work, it means that when we do something faster, it helps our profit margins. When I spend less time developing a product, I'm going to make more money. It means that when I can reuse technological building blocks
It means that when I can reuse technological building blocks

(05:24):
that we've built for other programs, sometimes worth hundreds of millions or billions of dollars in investment, it means that I'm able to make more money, rather than losing out on money that otherwise would have been paid to me to redevelop those capabilities from scratch for a given program as a bespoke building block. And those are really all the incentives you need to be a fast-moving, modern

(05:46):
tech-company-style defense company.

Speaker 1 (05:48):
You're not the typical Silicon Valley founder wearing a black turtleneck, nor are you walking around and talking like a buttoned-up defense contractor CEO. Like, how does that play?

Speaker 2 (05:58):
Well, I don't know. I am a little bit of a caricature, but it's because I just haven't changed. I'm wearing the same stuff I was wearing a decade ago, and I'm doing the same stuff I was doing a decade ago. I'm blessed with success to do whatever I want, and I think that's probably what's behind a lot of the more esoteric figures in Silicon Valley. They're people who have done well enough for themselves that they can afford to do whatever they want and have whatever image they want, and don't necessarily have to adapt.

Speaker 1 (06:19):
Do you wear a suit when you go to Washington?

Speaker 2 (06:21):
Absolutely, you do.

Speaker 1 (06:22):
Okay. So that's, like, there's a line.

Speaker 2 (06:24):
So funerals, weddings, and Washington, D.C. They all get suits. Okay.

Speaker 1 (06:30):
How often are you in DC? All the time? And what are you doing? Is it relationship building? Is it trying to win contracts and talking to lawmakers?

Speaker 2 (06:37):
It's all the above. A lot of the core of
what we do is kind of centralized in DC, so
not just the lawmakers who have the power of the purse,
who fund new R and D, who decide how things
are going to be procured, but also the decision makers
on individual product sides. It's very different from my last job. My last job at Oculus, we were trying to sell virtual

(06:59):
reality headsets to millions of people. We had to convince them one by one that VR was something they needed in their lives. And the military is a little bit different. For a lot of these major programs, the number of people that you need to convince in order to make a major sale, right, is actually very, very small. It's a small number of people.

Speaker 1 (07:17):
You also overcame some serious obstacles to get Anduril to this point, right? Like SpaceX and Palantir. How did they pave the way?

Speaker 2 (07:25):
First of all, they proved that it was possible for a new company to win significant contracts with the government, and to use the competitive process to kind of force themselves into niches that they normally would not have been able to get into. It wasn't an easy thing. I mean, they were literally suing their customers in court saying, you have to take our proposals seriously. You have to

(07:47):
give them a fair shake, and you have to compare them against what other people are doing in a good way. You can't pay other people billions of dollars to make what we literally already have made for you to buy as an off-the-shelf product. That was a really big deal. Palantir and SpaceX also really paved the way for Anduril in that, in the last thirty-five years, they were the only defense unicorns. They were the

(08:08):
only new defense companies that were worth over a billion
dollars that were really at any kind of scale. I
don't want to sound like a rich guy. A billion
dollars is not actually that much money. It's a lot
for a person, but it's not a lot for a
company to be worth. You only need low hundreds of
millions in revenue to be worth that. You're talking at
that point about a fraction of a fraction of a

(08:29):
tiny percentage of the Department of Defense budget. The other thing that Palantir and SpaceX proved is that you could definitely start a defense company if you bring in really significant resources. It's no coincidence that the only two companies to break through in the last thirty-five years, since the winding down of the Cold War really, were both founded by billionaires. It's unfortunate, but it reflects the reality that we've created
It's unfortunate, but it reflects the reality that we've created

(08:50):
where this muscle we used to have as a country
of turning small, innovative defense companies into large scale providers
of weapons, we lost it. And the only way to
bypass that was to already have made billions of dollars
somewhere else. But as a country, we need to do better.

Speaker 1 (09:05):
Right, And you've raised over two billion dollars.

Speaker 2 (09:08):
I don't know if I can say the exact number, but a lot of money. Raising money was not easy in the beginning. I mean, investing in weapons was actually explicitly against the rules in partner agreements for most venture capital firms, even ones that are quite forward-leaning, like my favorite example, Founders Fund. Not only were they the first institutional investor to write a check into Oculus, my first company, but

(09:29):
even as a forward-looking, contrarian, sometimes very controversial venture capital fund, they still had the terms for their fund saying they could not invest in weapons companies. And they ended up changing that and then investing into Anduril. It was really easy for venture capital firms in twenty seventeen, when we started Anduril, to say, conflict is over.

(09:51):
We're living at the end of history. This idea of putting our best brains towards things that can kill people is a waste of talent and a waste of money, and unethical, even. And that's not what anyone's saying anymore.

Speaker 1 (10:06):
So what does it take to win?

Speaker 2 (10:07):
What does it take to win? Well, I wish that it was just building the best thing, because that's what we want to believe: that we live in a pure meritocracy, a technocracy, where you build it and they will come. But it doesn't actually work that way. We have to be very conscious of what the DoD's problems are, what they believe their problems are, and what Congress is willing

(10:29):
to fund. It's a much more complex set of stakeholders than, let's say, selling virtual reality headsets. For example, if Congress doesn't believe that something is a strategic priority for the nation, they're not likely to fund it. We also have to work on things that the Pentagon cares about, because we don't have the luxury of going through a ten-year or twenty-year development cycle and just getting paid to develop something for years and years on end.

(10:51):
We have to build products that we can immediately sell. So if it's not a priority for the Pentagon, an urgent strategic priority, it's not going to go into production and purchase fast enough for us. And finally, we want to work on things that other people are doing a bad job of. Anduril does not want to be in the business, generally, of building things that have a lot of

(11:11):
great, competent, affordable competition. If someone is building a great capability, whether it's a robotic system or a weapons system or a sensor system or software system, if they're doing a good job of solving some problem for the DoD, I don't want to compete with them. I don't want to burn up my venture capital dollars burning their company down. I want those guys to succeed. I want us to have a flourishing ecosystem. I probably want to work

(11:33):
with them and partner with them. So typically, where we
are building things, it's things where we think we can
do a lot better and a lot cheaper than the
existing incumbents.

Speaker 1 (11:42):
So let's talk about where the money comes from. Anduril has raised billions of dollars. Yep. How much of it is from VCs? Are you using your own money? How much is coming from the Pentagon?

Speaker 2 (11:50):
So, yeah, we have a lot of money that has come from venture capital firms and also other investors. You know, we're not only raising from venture capital firms. There's family offices that have invested, strategic investors who have invested, but certainly a lot of venture capital. I'm a big fan of venture capital as a concept. Allowing companies to grow at an inorganic rate is a very powerful thing, and I learned that during my Oculus days. I've seen it

(12:12):
happen at the companies that many of my friends have started.
I'm a big fan of it in general. I'd say that we get a little bit of money for development from time to time from the DoD, but what we really look for from them is demand signal. We say, listen, you don't have to pay us to develop technology, but we do need you to tell us if you would buy something if we build it. And sometimes it's an obvious,

(12:34):
oh yeah, we would buy that. Sometimes it's, I don't understand why I would buy this. You have to explain to me why this makes any sense. And then it's on us to explain why the thing we're going to build is actually the better, cheaper, faster way to solve their problem than the thing they're already buying. And if we can get that demand signal, then we can make the decision to invest our own money. So, the money of our investors: some of it is my

(12:55):
own money. When Anduril started, we started in a warehouse that I had already bought for the purposes of doing this type of thing. So it allowed us to move faster than a company that would have had to start day one and then start raising money before they're able to even get started.

Speaker 1 (13:09):
Is Anduril profitable, or are you making products?

Speaker 2 (13:11):
We are not profitable now. We have elements of the business that are very profitable. Our successful products, our mature products, the ones that we've been doing for the longest, have followed a very repeatable and reliable path: from investment, losing money, through to selling to the customer, making money, paying back the initial investment, and now throwing off cash that we use in the rest of the business to

(13:32):
fund new research and development. All of the money that
we make we put back into new research and development.
That's another big difference between us and a lot of
other companies that are kind of at steady state, not
really growing. We could just stop working on all the
new stuff, kill all of our research and development, and
be a profitable company, but that's not what we set
out to do. We set out to be a major
defense prime and to do that, we're going to have

(13:52):
to keep reinvesting the profits. We're going to have to
keep putting more money into these new things, and I'm
confident that on some timescale all, or at least most
of those bets are going to turn into profitable business lines.

Speaker 1 (14:03):
So how do you get there? You want to go public. Your future peers potentially are public companies. How do you get ready for that?

Speaker 2 (14:11):
The way that you get ready for that is building
a company that is the right shape to be a
publicly traded company. And that's a different shape than you
want to be when you're in the early markets in
the kind of high growth, high risk venture capital world.
You need to be a company that has not just
a history of growth, but a clear path for future growth.
You want to be a company that shows that you

(14:32):
can reliably turn investment on our part into a profitable
product line, and you want to show that the company
as a whole is profitable. These things sound trite, like, you gotta make money, you gotta be good. But I mean, that's really what it boils down to, and a lot of companies don't get there. There's a lot of VC-backed companies that don't meet those requirements. They

(14:52):
just, you know, they don't make money, and they're not growing. There's a lot of companies that are growing but not making money. You have to do both to be successful as a public company.

Speaker 1 (15:01):
Well, as you said, the numbers are big. Microsoft got a twenty-two-billion-dollar contract to supply the US military with HoloLens mixed reality headsets. As the creator of Oculus, what do you think of that?

Speaker 2 (15:12):
There's been so many programs over the years to put a radio and a head-mounted display onto a soldier and do really good stuff with it. It's a science fiction staple for a reason. The ability to have a set of glasses that tells you where the good guys are, shows you where the bad guys are, shows you where you're safe and where you're in danger. That allows you to have vision so you can see through

(15:34):
buildings, by using tracks that are fed by other sensors in the air, on other soldiers, making it so that anything that any sensor can see is something that you have superhuman perception of. It's obviously the future of kind of close combat. So I'm a huge Microsoft fan. I actually had a launch party for Windows seven at my house. That's right. Me and my friends, we all got together at my house and we installed Windows

Speaker 1 (15:56):
Seven together. With your lightsabers?

Speaker 2 (15:59):
I mean, we have lightsabers, but we didn't bring them out for the party specifically. And making it even nerdier, we were already running the release candidate software, so we'd actually already been using Windows seven for the better part of a year at that point. But this was officially activating the final release. And of course, the IVAS program is kind of born of the investments they made in HoloLens. It's gone through some hiccups here and there,

(16:20):
but IVAS is actually one of the coolest programs going on in the DoD, in my opinion. It's one of the ones that is truly looking at what the future should be, rather than just building an iterative upgrade for a legacy system. For the Army to say, we are going to put tens of billions of dollars of our resources towards something that is a radically new capability, you know, like

(16:44):
transforming soldiers from people who see with their eyes to seeing with technology. I think it's as big as inventing the aircraft carrier. It's as big as inventing nuclear submarines. It's a really big bet that the future is going to be different. The DoD is full of programs that are not that. That excites me.

Speaker 1 (17:04):
Some tech employees have pushed back on working with the
US government and the US military. Do you see where
they're coming from.

Speaker 2 (17:10):
So there's been Microsoft employees who are unhappy about this, unhappy about IVAS and HoloLens. There's people at Google who have been unhappy about it. There's people even in Amazon who have been unhappy about it, in terms of the cloud services that Amazon's providing for the Department of Defense. I think that if you're concerned about ethics in defense, then you should be more involved, not less. If these

(17:30):
are the people who think that their opinions on how wars should be fought are so strong, they shouldn't just be on the periphery of these programs. They should be fighting to get transferred to the core of these programs, where they can make a difference. Fundamentally, I think it's an emotional thing. They came to a company to work on consumer tech. They weren't told that their work would potentially be used for violence, and they don't like that. And I empathize

(17:50):
with that, because to them it feels like a little bit of a bait and switch: I came to Google to work on advertising, and instead my work is going to fight US adversaries. At Anduril, that's why we're so clear with everybody about exactly what we're doing.

Speaker 1 (18:04):
Autonomous weapons powered by AI are now on the battlefield more than ever. How advanced are they today?

Speaker 2 (18:10):
Not as advanced as they should be. Autonomous weapons, not necessarily using modern artificial intelligence techniques, though, have been deployed for, depending on the way you look at it, decades or even longer by the United States. There's a long history of autonomous weapons on the battlefield. So it's always interesting when people say, oh man, Pandora's box is being opened. You know, what are the ethical concerns? Has the DoD thought about this? The reality is, the United States government

(18:33):
has more thinking on this and more experience with this
than any nation in the world, certainly than any think
tank in the world that's just now looking at these
issues for the first time. You get a lot of
pushback from people who are like in the United Nations
and saying, well, can't you agree that AI should never
pull the trigger, that an AI should never be making
the decision about when to kill and when to not.

(18:53):
And I totally reject that premise. It's a soundbite that sounds good in the moment, but then you have to answer the question: what's the morality of making, for example, an autonomous land mine that is not allowed to differentiate between a school bus of children and a Russian tank? I think it should be allowed to make that distinction. People say, well, what if it's not perfect? Well, that's a big problem. We need to make it so that

(19:14):
it is perfect. But in the meanwhile, I'd rather get it right ninety-nine percent of the time than zero percent of the time with a dumb system that isn't able to act in that way. That's why I'm working in this space. If other people feel strongly about the ethics of weapons, I encourage them to not sit on the sidelines and critique. Get in the arena and work on them yourself, and show that you can make even more precise things that are even less damaging to people.

Speaker 1 (19:37):
AI means a lot of things. Silicon Valley's in a frenzy right now about LLMs and generative AI. But that's really different than the artificial intelligence you're using.

Speaker 2 (19:45):
Right. Anduril is, at its core, an AI company. We have been since the beginning. Our first product we started building, and the core product that applies to all of our other hardware products, is our software platform, which is Lattice: an AI that fuses data, moves data, gets all the right information to the right people and the right robots at the right time, detecting and classifying targets across

(20:06):
the battle space, across every domain.

Speaker 1 (20:09):
Lattice is kind of like the control center, right? Or the bridge between the hardware and the AI.

Speaker 2 (20:14):
It's kind of like a distributed brain, and also the control interface, and also the processor, and it's a whole bunch of different things. It's everything you need to deploy autonomous weapons at scale. It's doing target identification, it's doing communication between people, communication between robots, deconfliction of good guys with bad guys to make sure you're going after the right things. It's

(20:35):
really an all-encompassing solution that powers all of our products. Now, when we started as an AI company, it was at a time, in early twenty seventeen, where AI was not that hot. It was this weird thing that some of the nerds were paying attention to, but there was a lot of naysaying, a lot of people who believed it was going nowhere. And we invested in it because we believed that it was really, really important for the future of the DoD. We believed we would not be able

(20:56):
to compete as a nation unless we had very capable artificial intelligence applied to weapons systems. Now, the AI that's going through this explosion is basically large language models, as kind of the dominant hype. They're very different from what we're doing: building an AI that is able to, for example, look at all the weapon systems that you have in an area connected to a mesh network, identify all of

(21:18):
the targets that you need to destroy in the very near future, and then come up with the optimal matchmaking process between targets and weapons. Very different from what a large language model is going to be optimized to do. Much higher standards for accountability, much higher standards for explainability, much higher standards across the board for auditability. And so it's very different. But I will say, the AI boom

(21:40):
has been very good for Anduril in terms of convincing people, on less the technical level and more the spiritual level. Like, people who didn't believe in AI before are like, oh, I've used ChatGPT, I was able to make it do things that I didn't believe a computer could do. I now believe that Anduril can actually build AI that powers these things. This even goes to politicians. I'll meet
powers these things. This even goes to politicians. I'll meet

(22:03):
with people in Congress who have been skeptical for years
about ANDROIL and about whether we could really build these
capabilities and have them be useful, and they say, yeah,
I use chat GPT and I asked it to write
me a recipe based on the food in my refrigerator,
and it did it. I think I understand what you
guys do. Now.
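
The target-to-weapon "matchmaking" described above is, at its core, a classic assignment problem. Here is a minimal sketch of that idea in Python using SciPy's Hungarian-algorithm solver; the cost numbers and the framing are hypothetical illustrations of the concept, not Anduril's Lattice implementation, which is not public.

```python
# Hypothetical sketch: weapon-target "matchmaking" as an assignment problem.
# Not Anduril's Lattice logic (which is not public); just the classical
# Hungarian method applied to made-up engagement costs.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i][j]: notional cost of assigning weapon i to target j
# (e.g., combining miss probability, time to intercept, munition cost).
cost = np.array([
    [0.2, 0.9, 0.5],  # weapon 0 against targets 0..2
    [0.7, 0.1, 0.4],  # weapon 1
    [0.6, 0.8, 0.3],  # weapon 2
])

weapons, targets = linear_sum_assignment(cost)  # minimizes total cost
for w, t in zip(weapons, targets):
    print(f"weapon {w} -> target {t} (cost {cost[w, t]:.1f})")
```

Each weapon ends up paired with a distinct target so that total cost is minimized; a real system would add constraints (range, timing, rules of engagement) that this sketch omits.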

Speaker 1 (22:21):
You're all riding the same wave.

Speaker 2 (22:23):
We're all riding the same wave.

Speaker 1 (22:24):
Well, Anduril is a hardware company and a software company.

Speaker 2 (22:28):
That's hard, right? Well, there's the old saying: hardware is hard. Software is where you can build a lot of the most durable advantages. The United States used to be able to build things that would fly twice as fast as our adversaries', and we'd be twice as fast for a decade. Those days are gone. Hardware advantages like that are going to be quickly copied by our adversaries. So a lot

(22:49):
of the most durable advantages we build, for example, using software to make decisions twice as fast or ten times as fast, or using software that allows you to be ten percent better in the strategic process than your adversary who doesn't have the better software tool, is a capability that I don't think our adversaries are close to copying.

Speaker 1 (23:05):
There are concerns that AI could deepen the fog of war.
What do you think about that now?

Speaker 2 (23:10):
I super disagree. I think AI is going to give everybody a very clear picture of what's on the battlefield. I think that the benefits accrue to the sensing side much more than the fogging side. I think AI is going to be a tool to put all the cards on the table, for everyone to understand just how powerful the US is. My hope is that you're going to have dictators who make better decisions, because even

(23:32):
they have better information from AI. Let's use Putin as an example. I don't think he would have launched this invasion of Ukraine if he had understood what was actually going to happen. Remember, they believed this was like a three-day special operation. They were going to roll in, it was going to be over very, very quickly, and it was going to be a huge political victory, a military victory. I think that if he had a better understanding of
I think that if he had a better understanding of

(23:54):
what was going to happen, I think he probably would not have made the play. And then if he had believed that he had the better play, the United States could have had greater confidence in that as well, and we would have reacted. So I think AI is actually going to bring a lot more clarity. I think that it's going to make warfare more like chess, in that everyone can see what everyone's doing on the board. The only unknown is what's in their mind.

Speaker 1 (24:15):
What are the implications of a more transparent battlefield? Does it mean fewer, faster wars?

Speaker 2 (24:22):
I think making the global stage more like a chessboard,
where you can see what everyone has and understand the
capabilities of what you can see quite well, is actually
a net stabilizing force. We've seen this with nuclear weapons.
Nuclear weapons and the concept and the doctrine of mutually
assured destruction worked because it was so easy to model

(24:42):
out. You know, it was actually not that many pieces on the board that were going to determine the outcome, and it was quite easy to model what would happen in these many scenarios. Conventional warfare is too messy for that.
But I think that artificial intelligence could lead us to
a world where we have much greater confidence in what
wars are fightable, which wars are winnable.

Speaker 1 (25:01):
So are you innovating yourself out of a job? Like, if the goal is fewer wars, that means fewer countries buying your technology, that means less need to make the technology, that means less, well, less money for Anduril.

Speaker 2 (25:15):
I mean, honestly, I think there's a little too much money in defense right now. So I'm like, there's a world where the defense budget goes down and Anduril goes up. I'm not saying that that's necessarily what the United States government should be doing, but our whole thing is that we build these systems at a much lower cost, and we're also building largely defensive systems. Now, if I was building defensive systems that lasted forever and were forever better

(25:38):
than all of our adversaries and could never ever be beat,
then yeah, I would be putting myself out of a job.
I have not yet come up with such a system,
but the second that I do, I'm going to be very excited, because I would love to put myself out of a job. Like, I would love to live in that post-historical world that everyone thought we were living in in the early twenty-teens. But we don't. That's

(25:59):
not the world we live in, and so I have to deal with reality, and human nature doesn't seem to be changing fast enough. I suspect that I'll be gone before war is.

Speaker 1 (26:06):
There's a lot of fear out there. Have you seen the Black Mirror Metalhead episode?

Speaker 2 (26:11):
I've seen every episode of Black Mirror.

Speaker 1 (26:13):
The killer robots turning on us? Is that possible?

Speaker 2 (26:16):
The thing about fictional scenarios, like in Black Mirror or in science fiction novels, is that they are fundamentally oriented around telling a story, sometimes about the technology, but also there has to be conflict. There has to be a good guy, there has to be a bad guy. I know the film nuts are going to say that's not true, but that's what most people want from a story.

Speaker 1 (26:36):
So, and that's the world we live in, as we just discussed.

Speaker 2 (26:41):
Oh, it is. But I'll give a specific example. I'm friends with Ernest Cline, the author of Ready Player One, and in Ready Player One, virtual reality is this technology that has sort of destroyed the world economy. Everyone's living in stacks of trailers in slums, because that's the only world that matters, and the whole world's kind of falling apart as a result. Another one of my favorite books, The Unincorporated Man, has basically

(27:03):
the same premise: virtual reality technology is outlawed because it destroyed the world's economy by making the market for physical goods disappear, among other problems. I've spoken with Ernie, and I've spoken with lots of other authors, and said, hey, do you actually think that this is what's gonna happen? They say, oh, hell no. I love virtual reality, but I have to tell a story. Imagine a story

(27:24):
in the future where there's no conflict. Tech is very advanced, people don't have to work nearly as hard, and there are amazing VR games that everyone just plays all the time, and everything's pretty great. That's not an interesting book. Nobody's gonna make a Hollywood adaptation out of that. I think that it's the same thing with Black Mirror. Like, yes, it can be a useful lens to look at the future, but they're not trying to come up with the most

(27:45):
likely conflicts. They're not even trying to come up with likely problems. They're taking really the least likely problems and highlighting them as the thing to be concerned about. In my mind, it's counterproductive, the amount of attention we focus on these fictionalized scenarios, when there's actually much more threatening things, like, for example, artificial intelligence being used to generate novel biological warfare agents or chemical warfare agents

(28:09):
in a garage. That's actually what the problems are gonna look like. And so it actually drives me kind of nuts. It really grinds my gears when you have people in the United Nations putting up pictures from Terminator movies and being like, this is what's coming. We have to make sure this doesn't happen. What the heck are you talking about? That's not even worth talking about. Yes, it could happen theoretically,

(28:29):
maybe someday, but it's like number one hundred on my list of things to be concerned about, and there's so many more concerning things. Like, you know what? I'm more concerned about bad people with reasonably smart AI doing evil things. I'm way more concerned about that than super smart AI doing evil things on its own without people involved.

Speaker 1 (28:49):
There are a lot of thorny ethical questions. We're talking about a possible future of self-guided bombs and killer robots and algorithms deciding who to kill. Who is liable? Is a human in the loop?

Speaker 2 (29:01):
First of all, I'd say it's not a future of that. It's a present of that, and a past of that. We have a long history of totally autonomous weapons deciding when to kill and when to not. I mean, in Vietnam we deployed radar-seeking missiles at scale that would fly over the horizon, look for emissions that they believed were correlated with a particular target, and decide to either go after it or fall into the ocean. Land mines are

(29:22):
fundamentally autonomous systems, in that they're designed to analyze electromagnetic profiles and decide if something is a military asset or a civilian asset, decide if they're going to pop. We actually have a long, long history of autonomous weapons.
The key is that a person is responsible for the
deployment of those systems, that a person understands the limitations
of those systems. The existence of an algorithm, or of

(29:44):
an automated fusing system or an automated target acquisition system
cannot replace human responsibility for deploying that weapon system, and
it has to be a person who deeply understands the
limitations of that system and who's going to be held
to account when it goes wrong. There will be people
who are killed by AI who should not have been killed.
That is a certainty. If artificial intelligence becomes a core

(30:07):
part of the way that we fight wars, we need to make sure that people remain accountable for that, because that's the only thing that will drive us to better solutions and fewer inadvertent deaths, fewer civilian casualties. And again, I don't want AI to do these things, but a lot of times the existing technologies are much worse.

Speaker 1 (30:22):
Can an AI start a war?

Speaker 2 (30:25):
I think that anything in the DoD could start a war. You could have a single rogue person cause something to happen that starts a war. You could have a malfunctioning weapons system start a war. You could have a miscalibrated weapons system, or a weapons system that's programmed with the wrong target information, start a war. There's a lot of ways you can inadvertently start a war. I think AI makes it less likely that those things happen, not more likely. It

(30:47):
adds another safeguard in the chain. For example, you could
have a system that is going to confirm not just
radiation signatures that it's going after, but also other things.
You could have an onboard AI saying okay, I see
an electronic signature from that, but I also see it
in the thermal spectrum, in the visible spectrum. I'm literally
doing video analysis of it, and it might say, oh, shoot,
that's not what I'm actually looking for. Despite having a

(31:08):
similar radar signature, I'm going to wave off. And so, in almost every case that matters, AI is going to be an additional safeguard against problems, and it's going to increase accountability.
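
The multi-sensor cross-check described above amounts to a conjunctive safeguard: several independent modalities must agree before the system commits. A minimal illustration in Python; the modalities, fields, and policy here are hypothetical, not taken from any fielded weapon's logic.

```python
# Illustrative sketch of a multi-sensor confirmation safeguard: engage only
# when independent modalities agree, otherwise wave off. Hypothetical logic,
# not any real system's implementation.
from dataclasses import dataclass

@dataclass
class SensorReads:
    radar_match: bool    # emission signature matches the expected target class
    thermal_match: bool  # thermal profile is consistent with that class
    visual_match: bool   # onboard video analysis agrees

def decide(reads: SensorReads) -> str:
    # Conjunctive policy: every modality must concur before committing.
    if reads.radar_match and reads.thermal_match and reads.visual_match:
        return "engage"
    return "wave off"  # any disagreement aborts the engagement

# Similar radar signature, but video analysis disagrees: the system waves off.
print(decide(SensorReads(radar_match=True, thermal_match=True, visual_match=False)))
```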

Speaker 1 (31:18):
Changing tack: you started with border security. How is your technology being used at the border now?

Speaker 2 (31:23):
Today, our systems are deployed all along the US southern border, our northern border, and in a whole bunch of other places. We've actually saved a lot of lives. We've prevented a ton of criminal activity, drug trafficking, sex trafficking, things that have been going on for decades without enough of a technological counter to them. And the common complaint people have is, oh,

(31:43):
but I don't like Anduril working on border security, because my preferred immigration policy does not align with that of the United States government. The point that I would make to them is that border security and immigration policy are related but very separate issues. Even if you want different immigration policy, even if you want every person in the entire world to be able to walk across our border, get a passport, get citizenship, and, you know, get a free house and a

(32:06):
free car, like the most extreme version of a caricature
of what someone who wants open borders would want. Even
that person should want border security because even if you
have full open immigration, you still need to know when
weapons are being moved back and forth across the border.
You need to know when people are being trafficked into
human slavery back and forth across the border. You need

(32:27):
to know when people are moving fentanyl-laced marijuana that is killing people who didn't even know that they were getting a laced product. These are things that need to be stopped with border security, regardless of immigration policy.

Speaker 1 (32:39):
Well, another thing: you know, critics see the technology you're building and worry it could be misused on American citizens, for example. Are they right to worry?

Speaker 2 (32:47):
Oh, of course anything can be misused. But if you want to point to things that can be misused against American citizens, I mean, the military has a lot of guns, the military has a lot of aircraft. The right place to control what our military can do is at the policy level, at the elected-leader level, not at the technological level. We can't say, a person might shoot somebody

(33:09):
they shouldn't, so the military shouldn't have guns. The military might surveil someone they shouldn't have surveilled, so they shouldn't have surveillance airplanes. It's just not a tenable position. That's what leads to NATO being armed with squirt guns. You have to have trust in the system. You have to believe that democracy works, and you have to believe that the right way to control these is on the policy side,

(33:32):
not saying you're not allowed to have the capability to
deter Russia because you might use it against US citizens.
If that happens, we're broken in ways that holding back
weapons is not going to solve.

Speaker 1 (33:43):
Speaking of democracy: Anduril's mission is to strengthen America. There's a clear argument to be made that former President Trump supercharged polarization in America. Is another Trump presidency good for America?

Speaker 2 (33:56):
It's pretty early to be commenting on what that presidency might mean, one way or the other. I mean, I'm a Republican, so if Trump wins, I'm going to be supporting Trump. Now, does that mean that I think Trump's perfect? No. But in a world where you make compromises and trade-offs, you select the candidate that is most aligned with what you want the world to look like. The DoD is pretty apolitical. It's not perfect at it, but they really do pride themselves on trying to be apolitical,

(34:18):
because they know that these problems that we face as a country transcend both the politics of the White House and also the timelines of the White House. In a world where it's flipping back and forth every few years and you have weapons programs that need to stay on track for decades, you kind of have to have an apolitical organization that's willing to work with both sides, that's willing to work with both parties. Anduril made a

(34:40):
lot of money under President Trump. We've actually made a
lot more money under President Biden, and we're going to
make even more money under whoever's president next.

Speaker 1 (34:49):
So even after everything, you'd still vote for Trump. Do you hope he's not the nominee? Do you think there's a better candidate?

Speaker 2 (34:55):
I'm actually going to decline, not because I don't have an opinion. I have an opinion, but I think that going out and saying my opinion would be detrimental to my political objectives. The thing is, I'm actually not nearly as political of a person as people think. Like, yes, I am.

Speaker 1 (35:09):
I mean people think you got kicked out of Silicon Valley.

Speaker 2 (35:11):
Well, but here, I mean, let's look at this. Like, yeah, I got kicked out of Silicon Valley because I made a nine-thousand-dollar political donation. I have an opinion as to who should be president. A lot of people think that's controversial, but, like, let's not forget: I supported the guy who became president of the United States. And for a lot of people, that's the thing they want to talk about. Like, here we are on TV: talk about, Palmer, you supported the guy who became president of the United States. That's so interesting. Let's talk about it.

Speaker 1 (35:34):
So you'd think, in a defense context, it makes more sense, because it is a political issue.

Speaker 2 (35:38):
It's possible, but I think we would have gotten past it a lot faster. The reason that people pay attention to it, at the end of the day, is because it's novel for a person in tech to have supported the person who became president that year. I don't think that's that controversial. Here at Anduril we have a lot of Democrats, we have a lot of Republicans, we have a lot of libertarians, not very many communists. But generally we have

(35:58):
a lot of political diversity here. That's because what we're doing is way more important than the politics of it.

Speaker 1 (36:03):
Which countries are you willing to sell your technology to? And who won't you sell to?

Speaker 2 (36:07):
I'm willing to sell, more or less, to anyone the US government wants me to. And this is another one of those areas where I have to trust that the democratic process is going to come to the right conclusion. If I don't believe in it, if I just say, no, I'm going to decide who Anduril sells to, it puts our country in a pretty bad position, especially if every weapons company acted that way. What if the United States says, we need to aid this

(36:29):
nation for strategic reasons, and then every weapons CEO said, no, I don't want to, I don't like that country. That'd be a disaster. That means the president's not in charge. That means Congress isn't in charge. It means a handful of billionaires are in charge. I don't think anybody particularly wants a handful of billionaires to be directly in charge of the military decisions of our nation. There's also a lot of situations that seem crazy to people because they don't

(36:52):
have all the information, including to myself. You know, there was a big brouhaha a long while back with the US selling certain defensive weapons systems to Turkey, and people were saying, oh, we shouldn't sell them those. We shouldn't give them those, because they might use them in bad ways. And then it later came out publicly (this is not controlled information) that we had nuclear-armed aircraft on that particular air base in Turkey on standby. Now,

(37:16):
the US can't go out and announce to the world, hey, the reason that these guys are selling this particular capability is because we have an extremely important, tactically relevant reason to do so. In that situation, you kind of have to trust that people are doing it for a good reason, that they're doing it for a reason that's aligned with US interests, that's aligned with the interests of our citizens.

Speaker 1 (37:33):
You went to Ukraine a couple months into the war. You met with President Zelensky personally. What did you see? What did you learn?

Speaker 2 (37:40):
Well, I saw a lot of stuff. This wasn't the first time I'd met with Zelensky. I had actually talked with him prior to the war, here in the United States, about potentially using our border security technology on Ukraine's eastern border to help track Russian systems when incursions might be happening, when buildup might be happening, and unfortunately we weren't able to make it happen. I saw a lot of things in Ukraine, some of which I can talk about,

(38:01):
some of which I can't. You know, we've had weapons there since the second week of the war; we've been involved the entire time. But one of the things that really struck me was that the Russians that were going into Ukraine really did believe that they were the good guys. And they really believed that because they had been fed a pack of lies. That made them willing to do things that they never would have done had they understood

(38:23):
the truth. There are people there who came into that fight with literally four days of clothes and a parade uniform. That's what they thought they were going to need in Ukraine. They thought they needed four days of clothes and food, and then there was going to be a parade in the streets for them, because everyone was going to be all over them. And the reason that made such a big impact on me is it made me realize that Russia's

(38:43):
most powerful weapons system isn't any piece of steel. It's
actually the control of their media apparatus that allows them
to raise an entire generation of young Russians who believe
these crazy things.

Speaker 1 (38:55):
Ukraine has become sort of a testing ground for low-cost, scalable technologies. How is this changing the nature of war? I mean, this is happening as we speak.

Speaker 2 (39:04):
Imagine how different the war in Ukraine would be if they could deploy thousands of drones all at once that didn't have to have a dedicated pilot, if you could have one person managing hundreds or thousands of drones in an attack. That's the technology that Anduril's building, and that's the technology that will take what's going on in Ukraine and put it on steroids. I think that that's what

(39:25):
we're going to need to deter conflicts in the future.

Speaker 1 (39:28):
There are reports of drones attacking targets in Ukraine without
human control. What have you heard about this?

Speaker 2 (39:35):
I know a lot about it, but this isn't the
right venue to talk about it.

Speaker 1 (39:38):
But that is sort of the fear of the robopocalypse, you know? Right?

Speaker 2 (39:43):
I can't talk about operational specifics of how our systems or other systems might be used, and the specific tactics that are going on, but I'm aware of a lot of autonomous weapons being used in a lot of different places, not just Ukraine, and my thoughts on it are generally big
just Ukraine, and my thoughts on it are generally big
thumbs up. I am a big fan, for example, of
having drones that are taking out a guy who's trying

(40:03):
to kill me, not being reliant on a direct communications link. Because if my drone's relying on a communications link, all he has to do to kill me is jam my communications. This is a huge advantage of autonomy. It means that my weapons can go over there and get that guy, even if he has an electronic warfare system that makes it impossible for me to fly my radio-controlled drones.
I don't feel bad about that. I think it's great.

(40:24):
I think it's fantastic. It means that Russia can't sever
everything that we make with the push of an electronic
warfare button.
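
The advantage described here, autonomy as a defense against jamming, reduces to a simple fallback rule: keep executing the task a human authorized before the link dropped. A toy sketch in Python, with all names and behavior hypothetical and not drawn from any fielded system:

```python
# Illustrative sketch, hypothetical and not any fielded system's logic:
# a purely radio-controlled drone fails the moment its link is cut, while
# an autonomous one keeps executing a task a human authorized beforehand.
class Drone:
    def __init__(self) -> None:
        self.link_up = True
        self.authorized_task: str | None = None  # set by a human operator

    def authorize(self, task: str) -> None:
        # A person approves the task while the link is still up;
        # responsibility stays with the human who deployed the system.
        self.authorized_task = task

    def step(self) -> str:
        if self.link_up:
            return "following operator guidance"
        if self.authorized_task:
            # Jammed: continue the pre-authorized task autonomously.
            return f"link lost, continuing authorized task: {self.authorized_task}"
        return "link lost, no authorization: return to base"

drone = Drone()
drone.authorize("disable designated radar site")
drone.link_up = False  # adversary switches on the jammer
print(drone.step())
```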

Speaker 1 (40:30):
China and Taiwan, how does this play out?

Speaker 2 (40:34):
Everything that Anduril is working on on the R&D side is oriented towards that fight. The term the DoD uses is a great power conflict, or a fight in the Pacific, and what that means is fighting with China over Taiwan, or at least being in the area to scare China from going after Taiwan. There's a lot

(40:55):
of ways this could play out. Will they move quickly? Will they move slowly? Will it be a trade blockade that escalates into an invasion? What we do know is the only way to win this war is to build things that make China have the belief that they cannot take Taiwan without a cost that is unacceptable. Right now, China does not believe that. China looks at what we have,

(41:16):
and they look at what we're willing to do, and
they look at our posture, they look at what Taiwan has,
and they say, we will reunite by force if necessary,
and we're going to be able to do it in
the next few years. We have to change their mind.

Speaker 1 (41:29):
What is Anduril's role here? Who are you partnering with, and what are you deploying?

Speaker 2 (41:33):
We're partnered with every branch of the United States DoD, building things that they tell us they desperately need to win a great power conflict, a fight in the Pacific, to deter a fight with China that nobody wants to fight, but that we need to be able to win. And we work very closely with them on this. We've got great ideas, we've got great tech, but at the end of the day, we are working hand in hand with

(41:53):
the people who know what we need to build to stop this fight, so that we can try, over the course of the next few years, to build it. I'm probably going to eat these words, but if China ends up invading Taiwan and things go the worst for them, I'm going to feel like we've really failed in our mission. I'm going to feel like we've failed in what we're doing. Because in the same way China is focused on

(42:15):
the Taiwan conflict as their kind of driving force for building their military, Anduril is focused on it as well.
Taiwan is not just strategically important, not just morally important.
It's economically extraordinarily important, and allowing Taiwan to fall is
probably the worst signal that we could ever send to
the rest of the world. There are places in the
world you can debate whether we should be involved. I'm

(42:37):
generally a non interventionist. Taiwan is not the place to
play that game.

Speaker 1 (42:41):
You actually spent a lot of time in China working on Oculus headsets. What do you know about China's capabilities in AI? What don't you know?

Speaker 2 (42:49):
I mean, I know quite a bit, some of which I can say and some of which I can't. Obviously, I'm building things that are designed specifically to go up against China's AI systems, so I know more than I can let on. But I did spend time in China back in the Oculus days, because that's where we did our manufacturing, and we didn't really have a choice. I deeply understand how dependent our country has become on Chinese manufacturing,

(43:11):
Chinese engineering, Chinese supply chain, materials. It's really extraordinary how they've pulled themselves up from almost nothing to being an economic superpower that is on track to surpass the United States, in such a short period of time. And we did this too. We're the ones that gave them the blueprints. We're the ones that gave them the tech. We're the

(43:32):
ones that shipped it all overseas, and I'm part of the problem. I'm one of the guys who did it. We tried to do our manufacturing in Mexico at one point, and we actually did do some early Oculus manufacturing in the US and Mexico. We just weren't able to make it work without China, and so I have sympathy for companies that do end up in China. But my strong advice

(43:54):
to new companies is do not build a company that
is dependent on China.

Speaker 1 (43:57):
Do you worry that China is outpacing us on technological innovation? Could the US military lose its edge to China?

Speaker 2 (44:07):
Well, depending on who you ask, China has between fifty
times and three hundred times the military shipbuilding capacity of
the United States. This is a huge problem, especially if
you're fighting a war where you lose all your ships
and it takes you decades to rebuild. They lose all
their ships and they rebuild the same year. This is
really inarguably an area where China has outpaced the United States.

(44:30):
Now, they haven't outpaced us everywhere, but in a lot of the areas that matter for a fight in the Pacific, they are kicking our ass, and the United States is not going to be able to win by following the same strategy they do. We're not going to be able to build enough shipyards and train enough welders to build three hundred times more ships. That's off the table. So we have to win with our brains.

Speaker 1 (44:51):
The stakes are so much higher here than strapping on
a headset and playing in the metaverse. How do you
think about the human cost, the human toll of what
you're building?

Speaker 2 (45:01):
How do you deal with the human toll? You have to deal with the dilemma that people have dealt with for millennia: war is hell. People die, and our goal has to be to try to minimize that. Our goal has to be to build the things that prevent it to the best extent possible. And some people who have a different moral framework internally will say, well, how could
a different moral framework internally will say, well, how could

(45:22):
you live with yourself if somebody got killed by one of your systems? And like, I'm not stoked about the fact that, you know, let's say, an autonomous weapon blows up a Russian tank crew. But I also know that it's the right thing to happen. I can't say I lose much sleep over it, because in a world where things like that need to be done for the greater

(45:42):
good, and to save the many more lives that could be lost by not making that decision, it's a really easy decision for me.

Speaker 1 (45:49):
What if your technology kills the wrong person? Would you
take responsibility for that? Would you apologize?

Speaker 2 (45:55):
Well, it would depend on the situation. Did it kill the wrong person because someone in the military shot at the wrong person? Is it because it was attacked or hacked by an adversary, using some components that China implanted in their supply chain to make a command and control system? It's a hard hypothetical. But like, if a system killed somebody who should not have died, and it's entirely my

(46:16):
fault, personally, Palmer Luckey's, where everyone in the military did their job, everyone on the deployment side did their job, everyone in my company did their job, and it was my personal failure? Yeah, I'd feel awful about it. But it wouldn't make me say, well, I guess we shouldn't build the tools that Taiwan needs to save themselves from being taken over by China. I don't think that anyone who works in this space can afford to have that opinion. Every weapons company has made

(46:39):
weapon systems that have malfunctioned at some point. Imagine if
the bar for withdrawing from that duty was getting it
wrong one time. The stakes are high. That's why we
have to do it. That's why we have to keep
doing it.

Speaker 1 (46:52):
So, big picture: what does the future of warfare look like? And what is Anduril's role in it?

Speaker 2 (46:59):
The future of warfare is going to be defined by large numbers of autonomous systems, managed at a high level by people who are able to focus on what people do best, leaving to robots what they do best. Every sensor is a sensor for every weapon. Every weapon is an appendage of every person who needs to use them, and the enemy will not be able to deny us

(47:19):
our communications. They will not be able to deny us the big picture. We will have full awareness of what we are doing, why we are doing it, and confidence that it is the best way to accomplish our strategic aims. That's what I think the future of warfare looks like. It's an ambitious future, but we're building the tech that can make it happen.

Speaker 1 (47:37):
Thanks so much for listening to this episode of The Circuit. You can watch the full episode featuring Palmer Luckey on Bloomberg Originals. I'm Emily Chang. Follow me on Twitter and Instagram @EmilyChangTV. You can watch new episodes of The Circuit on Bloomberg Television, on demand by downloading the Bloomberg app to your smart TV, or on YouTube, and check out other Bloomberg podcasts on Apple Podcasts, Spotify,

(48:00):
or wherever you listen to your shows, and let us know what you think by leaving a review. I'm your host and executive producer. Our senior producers are Lauren Ellis and Alan Jeffries. Our editor is Alison Casey. See you next time.