
February 24, 2020 83 mins

Should Artificial Intelligence be able to make the decision to take a human life? And if it does, who will be liable if — or when — it goes wrong? When it comes to the future of war and technology, the ethics are murky.


Tech is creating a new arms race. Will the U.S. be able to keep up with the likes of China and Russia? And what ethical lines will we draw, or cross, to maintain our national defenses?


Let’s rewind to Orange County circa 2017: A handful of entrepreneurs — eating Chick-fil-A and Taco Bell — sat around a table exploring the idea that what the United States needs is a real-life version of Stark Industries. Yes... from Iron Man. That brainstorming session led to Anduril — a defense technology firm that’s since become a billion dollar company at the center of the debate around the future of war.


Laurie Segall sat down with Anduril’s co-founder, Trae Stephens, who spends a lot of time thinking about the philosophy of war and how technology is transforming it. In this episode of First Contact, we explore a framework for redefining war — where the front lines of futuristic battlefields are blurred, and technology is leading the charge. Expect rigorous debate. Unpopular viewpoints. And uncomfortable scenarios.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeartRadio. I guess I come back to this line. Maybe it's a little dramatic, but it's like, AI to kill or not to kill? Right? Like this idea that you guys could be building out autonomous systems that can make the decision to kill? Um?

(00:24):
Will you do that? It's very very hard to predict
the future, but to the extent that the tech is
deployed as a last resort and it ensures that human
flourishing can continue in a more abundant way? Absolutely. To
kill or not to kill? Should artificial intelligence have the

(00:46):
power to make that decision? And if it does, who will be liable if, or when, it all goes wrong? Tech is creating a new arms race. Will the United States be able to keep up with the likes of Russia and China? And what ethical lines will we draw, or cross, to maintain our national defenses? When it comes to

(01:07):
our future, the ethics of war and technology are murky. Now,
with that in mind, I want to take you to
Orange County. Picture a handful of entrepreneurs, some of them
are controversial, sitting around a table. They've created a PowerPoint outlining some radical ideas for the future of defense technology.

(01:29):
They're eating Chick-fil-A and Taco Bell and exploring the idea that what the United States really needs is a real-life version of Stark Industries from Iron Man, to build defense technology that would protect the United States from future threats. Fast forward: that brainstorming session led to Anduril, a defense technology company that launched in 2017. Now it's a

(01:53):
billion dollar company and at the center of the debate
when it comes to the future of war. One of
its founders, Trae Stephens, spends a lot of time thinking
about the philosophy of war, how technology is transforming it,
and how we're going to protect ourselves as a nation.
Expect vigorous debate, unpopular conversations, uncomfortable scenarios, some talk of

(02:15):
superheroes and science fiction, and a framework to talk about
war where the front lines of futuristic battlefields are blurred
and technology is leading the charge. I'm Laurie Segall and this is First Contact. Well, generally I talk about my
first contact with folks, and I have no first contact

(02:37):
with you. This is the first time, Trae, that I've ever met you. Uh, so, welcome to First Contact. Thank
you for having me. But this isn't my first contact
with this idea of like war and ethics and defense technology.
Because I know your co-founder, Palmer Luckey. I've interviewed
him many times. He's an interesting dude. He is, yeah,
super super interesting, super unique guy. How would you describe him?

(03:01):
I think he's one of these people that's thought deeply
about most things. Rather than having, you know, high conviction
but shallow knowledge, he has high conviction and lots of knowledge.
There seems to be this high correlation between people in
the tech community that are kind of like obsessively passionate
about topics and slightly eccentric and personality that makes them

(03:24):
a great fit for entrepreneurial endeavors. And Palmer is totally
one of those people. He's the founder of Oculus, we should mention for our listeners, which is a virtual reality company. Oculus sold to Facebook for three billion dollars. And I
remember my first time meeting him was at Web Summit.
It was years ago in Ireland, and he always wears

(03:44):
Hawaiian shirts and he always says things that definitely get him into trouble. And he ended up leaving Facebook because of some controversy and starting this company that you started with him that we're going to get into, all about defense technology. Which is also interesting and controversial in its own right. And it's called Anduril. And I remember the last time I was at Anduril's headquarters, it was

(04:06):
probably a year and a half ago. He's like walking
around barefoot. I just want to paint the picture because
it is quite fascinating of all sorts of like really
fascinating technology that's like the future of warfare. And it's
in LA, it's not in Silicon Valley. You have
a founder with a Hawaiian shirt who's like kind of
like a billionaire. I think he's a billionaire, right, it
depends on who you ask. Yes, okay, unconfirmed, but multi

(04:29):
multimillionaire um. And you guys are just dealing with these fascinating,
fascinating issues that a lot of people in Silicon Valley
shy away from. So that's all kind of set it
all up. Was that? Is that a fair way to
set it up? Yeah? I think the only thing that
Palmer would be disappointed if I didn't point out is
that Palmer did not leave Facebook. Palmer was fired from Facebook.
That is an important distinction. But yeah, I mean he is,

(04:53):
as I said before, eccentric. He very rarely wears closed-toe shoes. I've seen him get in trouble with this before.
We were on an off site together and he was
playing laser tag on a mountain bike, which is something
that I guess you do, and going downhill very rapidly
and went over the handlebars with his Hawaiian shirt open

(05:13):
and barefoot of course, and ended up like pretty pretty
badly scraping up his chest and cutting his toe on
I'm assuming the pedal. But the kind of cool thing
about him is that he has that playful eccentric side,
but he's also incredibly serious and so there are very
clear reasons why he was on the mountain bike, why

(05:34):
he was charging downhill, what his strategy was for doing that.
So you know, this is like the kind of the
combination that I think makes him so unique: it's deep seriousness, deep strategy, but also eccentric and playful.
When we talk about Palmer's background, your background is like
completely different in a fascinating way, and so as co

(05:54):
founders is super interesting because your background is in intelligence, right, um,
and I saw that you were a student during 9/11, and I read that that deeply impacted you, and
you ended up working for government, so you didn't quite
have the Silicon Valley background that a lot of the
people you're working with had, right. Yeah, I was a

(06:15):
senior in high school when 9/11 happened. Prior to that day,
I thought I was going to go into journalism, actually,
and I remember sitting in my principal's office in high
school watching the TV and talking to him about what
it meant for the world, and really decided on that
day that I wanted to go into a career in
service to the country. Had kind of toyed with whether

(06:38):
or not that meant going to the Air Force Academy
and trying to be a fighter pilot or going into
the intelligence route, probably via Georgetown, which is where
I ended up going. Ended up doing the intelligence thing,
mostly from a probability perspective, just realized that I might
not end up getting that fighter pilot billet and could
have ended up doing something far less interesting in the

(06:59):
Air Force. But yeah, it all kind of goes back
to that day. And talk to me about the government days,
What exactly were you doing? Well, obviously I can't
say exactly what I was doing. If we were going
to talk around it in code, could you? We were going to talk around it. I was working in the counterterrorism mission, um, specifically focused on computational linguistics for

(07:22):
Arabic script-based languages. Okay, explain all of that. So,
basically I studied Arabic in college and one of the
things that you find very quickly is that the language
works very differently than the Romance languages, and the naming
conventions are way harder to figure out and they're less

(07:42):
unique than English names. So the most common name in
the English language, I think it's John. It's like three percent of all English-speaking men have the name John, either first, middle, or last. In the Arabic-speaking world, a far larger share of men have the name Mohammed. So
that makes name matching incredibly challenging. So if you have

(08:02):
a name like Mahmoud Abbas, that could be literally like hundreds of thousands or millions of people with that exact name, Mahmoud Abbas. And so the job for an intelligence
officer is significantly more challenging when dealing with these foreign
languages because you have to figure out is this person
actually the person that I'm trying to get reporting on

(08:24):
or is it just another person that has the same name.
And so I was working on trying to resolve those entities, um, with one another across a vast number of databases, which is a pretty tough problem.
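To give a concrete sense of the matching problem Trae is describing, here is a minimal sketch in Python of fuzzy string matching across transliterated names. This is illustrative only, not the intelligence community's actual tooling; the example names, spellings, and scoring approach are assumptions for demonstration.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Crude string similarity between two romanized names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Arabic names reach English-language databases through inconsistent
# transliteration, so one person can appear under many spellings
# (all names below are invented examples).
candidates = ["Mahmoud Abbas", "Mahmud Abbas", "Mahmood Abbas", "Mohammed Abbas"]
query = "Mahmoud Abbas"

for c in candidates:
    score = name_similarity(query, c)
    # A high score only says the strings look alike. With millions of men
    # sharing a handful of very common names, string similarity alone says
    # almost nothing about whether the people match, which is why real
    # entity resolution also weighs dates of birth, locations, associates,
    # and other fields across databases.
    print(f"{c:>15}  {score:.2f}")
```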
So, was there anything that was like, okay, I'm leaving the government, I'm going to Silicon Valley and working at Palantir? Was there anything that happened that was kind of the catalyst?

(08:46):
I think it's just like the grinding down of bureaucracy.
I certainly never thought that I was going to work
for a Silicon Valley venture backed tech company UM when
I graduated from college. That was never part of the plan.
In fact, I remember joining Palantir and asking really crazy questions like, what is a stock option? What is equity? Why does that matter to my comp? What is a

(09:07):
vesting schedule? I knew nothing about how these things worked,
and I think basically coming to the conclusion that I
wanted to be at a place where I was surrounded
by a bunch of people who were way smarter than
I was, who were much better at me at the
things that I was bad at UM, where I could
add unique value, and that moved significantly faster than the
bureaucracy of the government, all the while still being able

(09:30):
to work on the mission that was really important to me. Palantir, for folks who don't know, is associated with Peter Thiel. Like, did you meet Peter, or was it that you got recruited? How did that work? I did not meet Peter. I actually got into Palantir from the guy that gave the
demo to me when I was working in the intelligence community.
It was a really small team at the time. I

(09:51):
think they were probably like less than twenty people at
the company, and I saw a demo, got really excited
about it, kind of pushed internally to be able to
use it, was told no, and then ended up, you know,
jumping ship and joining the company really early on. And
how did you end up meeting Palmer? Yeah? So I
got to know Peter really well over my time at
Palantir, and in two thousand and thirteen he asked

(10:13):
if I would be interested in coming to join Founders Fund,
which is a Silicon Valley venture capital firm that Peter
and some of the other PayPal founders started together. And
we should also say, like, Founders Fund, for folks who don't know, is an interesting venture capital fund in Silicon Valley, because they say unpopular things sometimes or invest in
things that are not just standard, Right, Is that a

(10:36):
fair thing to say? Yeah, that's totally fair. We have a manifesto on our website that I think when it
first came out was pretty unpopular. People thought we were
a little crazy. The cool thing about it has been
over the last really fourteen fifteen years that the fund
has been around, these ideas have become actually kind of
weirdly popularized. What was the manifesto? It's a kind of

(10:59):
long form essay on how we're thinking about the future
and how tech has stagnated and how venture capital as an industry hasn't served entrepreneurs well, and kind of
diving into all of those reasons. But you see some
of these things cropping out of it, like the founder
friendly ethos of a venture fund. This idea that actually
the founders are the most important single unit, elemental unit

(11:23):
of a successful venture and you should as a fund
do the things that are optimizing for helping those people.
That has become super common, Like everyone talks about being
founder friendly as a VC, but you still see really
great funds that are firing CEOs that are taking aggressive
governance stances against them, and this is something that we

(11:45):
foundationally will not do. We will not vote against founders.
We won't fire founders. We believe that we should
be optimizing for upside and not trying to mitigate mediocre outcomes.
You know, there's a bunch of other, like more specific
ideological things that you've probably seen on Twitter or heard
in the media about the positions that we take. But yeah,

(12:07):
we are, We're pretty different, I would say, right, And
so you, um, you met Palmer and you guys started
talking about war and defense, not not initially, So what
were those initial conversations like? Yeah, Founders Fund was the first institutional investor in Oculus. So we've known Palmer since
I think, you know, he was like a teenager at
that time. I mean, and by the way, let's be honest,

(12:28):
he's not that much older now. He's twenty-six now, so, but yeah, he's not that much older now. Yeah.
I met him, was totally blown away by Oculus, bought
one of the original dev kits. Have kind of enjoyed
interacting with him just because of his passion and creativity,
and had a couple of conversations with him about this

(12:50):
search that I was doing at Founders Fund. So when
I first joined the fund, not knowing anything about venture capital,
not knowing anything about how the tech community works aside
from my one experience at Palenteer, I decided that I
wanted to go and find the next Palantir or SpaceX.
Founders Fund is a large investor in both of those companies,
and the theory was those success stories should breed other

(13:11):
success stories, And so I went out and looked at
every company I could find that was doing business or
interested in doing business with the government, and ended up
not really finding anything. And I was talking to Palmer
about this and saying, like, you know, it's pretty crazy
given that our adversaries, Russia, China, to a lesser extent but still present, Iran, North Korea, were doing things that

(13:33):
were really challenging the strategic positioning of the United States
and our allies globally. And Palmer agreed and had a
ton of thoughts, as Palmer does about many things, and
I basically I had this conversation with him where I said,
you know, what the United States really needs is Stark
Industries from the Ironman movies. We need a company that

(13:55):
is going to be well capitalized that will build products
for the defense community, not requirements programs. We're not doing a services contract on a cost-plus basis, but building products vertically and selling them directly to the defense community to advance our ability to defend and deter. He got
really excited about that idea, and that was kind of

(14:17):
the early phase of him departing, as we said, from
Facebook and starting this company. And I read that there
was like a brainstorming session that happened. I think there
was Chick-fil-A involved, some crazy PowerPoint where
you guys are putting the future of warfare on there
and there's all sorts of different future of war slides.
Can you just take us to that what that looked like?

(14:39):
Take us to that brainstorm. I mean, because you're speaking in, like, very above-the-whatever, like, Silicon Valley terms. Like,
just take us to what that brainstorming session looked like.
We're in Orange County because that's where Palmer lives. You
guys are all sitting around eating Chick-fil-A. I'm
from the South, so I can get on board with that.
You know, explain what that jam session looked like, what
this PowerPoint of a future that could be borderline dystopian,

(15:01):
if we're not careful, we'll get into that, looks like, and, like, how this whole thing happened. Just go there. Sure, no Silicon Valley terms, that's what it actually looked like. Yes. So it was at Palmer's house.
This would have been I think in April or May
of two thousand and seventeen. We had invited a bunch
of our smartest friends that we could find that could
be potentially interested in leaving their current jobs to come

(15:23):
and do this with us, and anyone interesting that we knew.
A lot of the early employees of Anduril. So we had Matt Grimm, who is a co-founder and the COO, Joe Chen, who was, I think, one of the first employees of Oculus with Palmer. And Palmer and I and Matt Grimm had put together this deck that you're referencing that
was basically like what would a stable world in the

(15:47):
future look like if we were actually able to deter violence,
Like how do we stop bad things from happening and
do that in a way that preserves our leadership over time?
There was Chick-fil-A. I will say that I'm
allergic to chicken. Palmer knew that I was allergic to chicken,
and, uh, Nicole, Palmer's now-wife, then-girlfriend, went to

(16:08):
Taco Bell and bought an entire bag full of bean burritos,
and so everyone's eating chicken and I'm sitting there just
like crushing bean burritos. Um. Yeah. Palmer is a
an avid fast food aficionado, and so he knows all
the all the places to go and the things to do.
But that that was our dining of choice that day,

(16:30):
and we kind of just went through this whole plan,
like what would Stark Industries look like? I love that you keep going back to, like, Iron Man. Yeah, you know what I mean. It's like, when we're talking about
the future of defense technology, how we're going to protect
ourselves from like Russia and China. Like it goes down
to like Chick-fil-A, bean burritos and Iron Man
and like a guy who likes to be barefoot. And
I don't mean this in like a snarky way, like

(16:50):
this is like what we're painting the picture of. It's
kind of interesting, you know. I think I think this
is one of the things that most people don't realize
about the defense community over really like the last hundred years,
is that these moments, these kind of crucial moments in
our history have been defined by like founders, by entrepreneurs.
Maybe they worked for the government, maybe they didn't. But

(17:10):
you had Bennie Schriever with ICBMs, you had Kelly Johnson with the U-2, the SR-71, the F-104, you had Admiral Rickover with the nuclear Navy, um, you had Howard Hughes and all the things that he did, um, in the aviation space.
And we've kind of gotten into this weird like quasi
communist state where like no one's actually responsible for anything,

(17:31):
Like there are no founders, it's just bureaucracy that's running
all of these multi, multi, multi hundreds of billion dollar
defense programs. And so I think the usage of Stark
Industries as the analogy is actually really powerful because it
says Tony Stark, the person, is actually the company. And
you see this tension that happens over the course of

(17:53):
all of the comic books and the movies where he
keeps trying to turn over responsibility for certain things, but
actually like it's him that makes this stuff work. And
I think Palmer has this powerful personality where he's able
to pull together really incredible people, really incredible engineers, work
through a vision and actually deliver the products that we

(18:14):
say we're going to deliver. Right. Okay, so go back to that day. And so you guys had put together this PowerPoint, and what did it have on it?
It had all these different... what the future of warfare looked like? What did the future of warfare look like?
I think future of warfare is probably the wrong term,
because the idea is actually to prevent warfare, right, Like,
you want to deter violence, you don't want to have
more violence. Um. And so the PowerPoint was kind

(18:37):
of talking about what are the near term things that
we can do using existing technology, what are the medium
term things that we can work on that are probably
going to be possible in you know, a five to
ten year time horizon, and what are like the science
fiction ideas that would completely change the game and force
a paradigm shift in international affairs. And we just had

(18:58):
this kind of crazy brainstorming session where everyone was pitching
in their ideas, saying actually that's not possible, or
actually that's more possible than we think that it is.
And we were trying to come to a rough agreement
on what the first couple of years of a company
would look like if we ended up starting it. And uh,
I think everyone left that day feeling not only really

(19:18):
excited about the vision and commitments in hand for many
of them, but also an idea of the types of
things we were going to be able to build and
a rough prioritization of what that would look like. And not to state the obvious, though, when we're talking about the superhero, um, bit, but you are coming with
a founder who was pushed out of Facebook for controversial

(19:40):
comments he made in, like, alt-right forums. You have Palantir, which you are a part of, which has come under fire for surveillance issues and that kind of thing, and is associated with Peter Thiel, who's most certainly a controversial figure. Nonetheless, um, so are you guys the superheroes
to do this? And how how did you prepare to
respond to people looking at this team and saying are

(20:01):
we going to trust this team to build and defend
our future? Yeah? I think everyone on the team has
wildly diverse ideological political beliefs. You know, Palmer's political beliefs
are not indicative of the rest of the founding team
nor the company. You know, the kind of the crazy
thing about this whole conversation is that, by US standards,

(20:23):
Palmer is pretty boring in his political beliefs. Like he's
a he's a libertarian, he believes in limiting government regulation,
he believes in you know, free market economy, and his
engagement in the political sphere is kind of limited to
pursuing those things. Um, I think he's gotten a lot
of really unfair press treatment that does not resemble reality.

(20:46):
And uh, you know, I think the company by and
large is very aware of that, from a founding team
to the employees, we're we're all very aware of that.
The important thing about the association with all of those
kind of different aspects that you just mentioned is that
we have open, harsh dialogue internally about everything about the

(21:06):
products that we work on, about the programs within the
governments that we work on, about the countries that we
work with. And that would only be possible if the
people involved, particularly in leadership, were open and receptive to
that rigorous debate. And I think this is something that
Palmer found at Facebook, is that that was not a
culture that was open to rigorous debate. In fact, it

(21:29):
was only open to a single ideological bent that is
militarized in a way that you know, I think the
American people should all be pretty concerned about. I think
it's interesting. The last time I interviewed Palmer, UM, I
had just finished doing a series on conservatives undercover in Silicon Valley, and, like, literally interviewed conservatives who didn't feel comfortable coming out and talking about being conservative in Silicon Valley.

(21:51):
We had to do it in shadow. This is before, I mean, there's certainly a culture war playing out, which
is a whole separate conversation, but we're seeing that on
a grand scale. Now we're going to take a quick
break to hear from our sponsors. But when we come back,
what exactly is Anduril building? A look inside the technology. Also,
if you like what you're hearing, make sure you hit

(22:12):
subscribe to First Contact in your podcast app so you
don't miss another episode. I want to talk about Anduril

(22:33):
just in general, like what you guys are doing.
So Palmer's like the product guy and he's like building
out crazy interesting technology. Um, and you're thinking, I think
a lot about these ethical issues and some of the
more philosophical issues as we kind of head into this
future. For folks who don't know Anduril, first of all, isn't this a Lord of the Rings reference? If I

(22:54):
if I'm getting it correct? It is, yeah. It's Elvish in Lord of the Rings for Flame of the West. Okay, um, reasoning behind that? It seemed really appropriate, still seems really appropriate. I mean, Andúril was the sword Narsil reforged. Narsil
was the sword that cut the One Ring from the
hand of Sauron in the early ages, and it was

(23:16):
reforged as Andúril to be wielded by Aragorn during the Fellowship. So it has kind of a storied history in the series. We're Tolkien fans, so it seemed to make a lot of sense. And so explain kind of the
premise of what you guys are building. Yeah. So again,
our kind of view of the future is that if
we're not very intentional in the West with building the

(23:40):
technology that's required to protect and preserve our values in
our way of life, that these technologies are going to
be built by our adversaries, and we shouldn't trick ourselves
into believing that our adversaries are in some way our
moral equals. They're not, and we want the best technology
that is going to deter conflict to be controlled and

(24:02):
operated by the people that have the value system that
we share. And so, you know, anything that we can
build that fulfills that mission while being very cognizant of
the ethical considerations, the very real, very meaningful ethical considerations
that are required as those things are built, is
something that we're really interested in working on. And so
I mean, at the most baseline thing, you guys are

(24:23):
building out consumer tech products, generally with artificial intelligence that
you sell to our allies in the government. Yeah, and
not everything is consumer. In fact, a lot of the hardware that we integrate into our systems is not intended for consumers. It's enterprise-grade equipment. But yeah, basically,
the way that you can think about it is that
we're building both hardware and software to achieve mission ends

(24:47):
at the lowest cost possible for the taxpayer. And so
rather than building a bunch of bespoke technology on, as I said before, these cost-plus contracts, like the F-35, the Ford-class aircraft carrier, or even these bespoke software programs, um, like the Defense Travel System and all sorts of stuff like that, we're trying to take the best in class that exists off the shelf, integrate it

(25:09):
with some things that we do have to build ourselves
because it's not available commercially, and then turn it into
a product that works today as well as we say
that it works and turn that over to the government
customers to use for their mission. Can you give us
a sense of the products you're building, the stuff that's out to market now, and who you're working with? Yeah. Um, so our first product is what we call Sentry Tower.

(25:30):
It's a thirty-some-foot tower that has integrated sensors,
so instead of security officers sitting in a dark room
with hundreds of little CCTV feeds, the tower is just
telling the operator when something is happening that they need
to look at. This is really critical for all sorts
of critical infrastructure, whether it's military bases, oil and gas facilities, um,

(25:52):
national borders, whatever it might be. Uh. Second product that
we built is basically a tower that flies. We call it Ghost, a helicopter that has many of the same sensors.
It's fully autonomous, so it takes off, executes a mission,
returns to base, and provides that same level of autonomous
operation and computer vision. And what is that meant to

(26:12):
do, and where do you expect to see that deployed? Yeah,
the same thing as the tower, except it moves.
And so a tower by its very nature has a
range that it can operate in. And if something is
moving outside of that range, or you expect that things
are moving at more velocity, you might want a helicopter
to be able to track and pursue. There are a

(26:33):
lot of remote operations, like Special Forces units in the field, that might want forward notice of things that they are going to be encountering on the road ahead of them, and so there are a lot of potential applications for Ghost. The third product we call Anvil, which is a kinetic interceptor for unmanned aerial systems. So I'll

(26:53):
explain that. So most people, I'm assuming have probably heard
about the threat of drones in airspace, particularly in military
environments where, you know, ISIS and other adversaries are taking standard consumer drone technology, like DJI Mavics and stuff
like that, putting grenades on them or just flying them
for surveillance operations, and this has become a terrible risk

(27:18):
to our service members abroad. Taking them down initially is
as easy as jamming their radio signal and forcing them
to return to where they came from, or denying GPS
in some environments, things like that. But as the drones
become more and more autonomous where they don't really emit
any signals that you can jam, it becomes harder and
harder to remove them from your airspace. And so kind

(27:41):
of the crazy idea that Palmer and the rest
of the team had was to use another drone, our
own drone with a terminal guidance system on it, to
lock on to the adversarial drone and fly into it
at high speed, kind of like a flying bowling ball
that's intended to knock them out of the sky. And
so all of these things kind of paired together, where
you have static passive surveillance from the towers, you have

(28:05):
a dynamic mobile surveillance capability from the helicopters. You have
the interceptor that can take out aerial platforms that are
approaching your facility or whatever it might be. These are all layers in an overall platform that we call Lattice, which is the battlefield management, command and control software that sits behind all of it.
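To make the layered picture concrete (static towers, mobile helicopters, interceptors, one software brain), here is a minimal sketch in Python of fusing detections from multiple sensors into tracks that alert a human operator. This is not Lattice's actual design; the platform names, the fusion rule, and the distance threshold are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One sensor report: which platform saw it, where, and the classifier's guess."""
    source: str        # e.g. "tower-07" or "ghost-02" (hypothetical IDs)
    position: tuple    # (lat, lon)
    label: str         # e.g. "person", "vehicle", "drone"
    confidence: float

@dataclass
class Track:
    """A fused object of interest built from one or more detections."""
    detections: list = field(default_factory=list)

def fuse(detections, radius=0.001):
    """Greedy fusion: detections close in space are treated as one object.
    Real command-and-control software would also model time, velocity, and
    sensor error; this only illustrates the layering idea."""
    tracks = []
    for d in detections:
        for t in tracks:
            last = t.detections[-1]
            if (abs(last.position[0] - d.position[0]) < radius and
                    abs(last.position[1] - d.position[1]) < radius):
                t.detections.append(d)
                break
        else:
            tracks.append(Track(detections=[d]))
    return tracks

# A static tower and a patrolling helicopter both report the same object;
# the operator sees one fused track instead of two raw video feeds.
reports = [
    Detection("tower-07", (32.1000, -112.3000), "person", 0.91),
    Detection("ghost-02", (32.1004, -112.3003), "person", 0.87),
]
for i, track in enumerate(fuse(reports)):
    print(f"track {i}: {len(track.detections)} detections -> alert operator")
```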

(28:25):
And fast forward from this conversation you guys were having, you know, in Orange County and talking about
all this kind of stuff to now like who are
you doing business with? Department of Homeland Security? Like, you know, who are you guys selling your technology to? Yeah. Predictably,
our largest customer is the U.S. Department of Defense, um, numerous branches. So we're working with Special Forces, we're working with the Marine Corps, we're working with the Air Force,

(28:45):
the Navy, So that represents the majority of our business.
We do also have some work with federal law enforcement
and the Department of Homeland Security, and then we're working
with some of our international partners, close allies like the
United Kingdom and very similar defense mission sets. The last
time I interviewed Palmer must have been, I would say, like, over a year ago. Um, the headline-grabbing thing

(29:06):
I remember for you guys was the first product you
guys were building. It's like the virtual border wall. And
I think that's the thing that people looked at as,
are you building the digital wall for Trump? Because this
was the first thing you guys put out publicly. A
lot of people looked at Anduril and looked at what
you were doing. Um, and asked you some of these
questions about is this ethical what you're building and is
this the right thing? So I'd be curious to know

(29:28):
where that stands. Now. Yeah, the work that we're doing
with Customs and Border Protection within the Department of Homeland
Security is not a policy initiative. We don't have policy
views on how border control should be managed. But we
do believe that within the constructs of a democratic government,

(29:50):
we should be supportive of the things that the government
is doing, the mission that they're executing. And I think
in this case, Democrats, Republicans, everyone kind of agrees that
you have to have some form of border security, just
like literally every other country in the world, and that
when you're making decisions about what policies you're going to

(30:10):
accept with regards to that mission set, you need to
have data. You need to know what's actually happening. Um,
you need to know what kind of people are coming
across the border. Is there actually the flow of drugs
and black market weapons that people think there are? Are
there unmanned aerial systems that are posing a threat to our civilian populations? You know, there's all these questions that you

(30:31):
want to have answers to and I think that kind
of an indefensible position would be, actually, we don't need information, we don't need any data. We don't
need to know what's happening. We should just let everything
happen that's happening. And so I think this is kind
of why we've gotten such bipartisan support for the technology,
is that we're not making a political assertion. How do

(30:52):
you think ethically about the data? So the data that
you guys gather, I'm sure you're gathering a lot of
this data, right, from everything it's picking up, and so what's the standard there? No, we do not own the data. The data belongs to the customer. And by the way, that's
in every case, every use case, aside from our own
technology that we're using in test environments. So yeah, basically

(31:13):
they get the feeds off of the towers, off the helicopters, or whatever else they're using. Human beings, actual people who have a job and a mission, in that job are making decisions about how they respond to that data. Um,
so if you see an unmanned aerial vehicle with drugs attached
to it, which is something that happens periodically, they can

(31:34):
decide are we going to pursue that. Are we going
to try to shoot it down? Are we going to
try to follow it until it runs out of battery?
Are we going to let it go? You know, these
are decisions that human beings are making at the end
of the day. So yes, absolutely there are ethical questions
that are involved with each of the products that we
build UM and there's a process by which I think
you go through thinking about how that mission is realized

(31:54):
and who are the people that are involved in making
that decision. So can you explain that? You talk a lot about this just war theory. Just explain to us, like, what's the gist of just war theory? Yeah, so just war theory. This is a many-centuries-old framework for evaluating how
and when you go to war and how war is conducted.

(32:16):
And you know, the general concept is that there are
all of these principles that we can apply to any technology.
They apply the same to a knife, an arrow, or a bayonet as they do to autonomous robots in battle. And so I wrote an essay, which it seems like you've probably read, about defense ethics. So there are four

(32:36):
specific principles that I talked about in the essay. The
first is last resort, so the principle of war as last resort. The second and third are the principles of discrimination and proportionality, and the last is right intent, or just authority. So this
is how you look at the ethical decisions that you
guys are making and the products you're building, and what
kind of framework you're going to think about when you're

(32:58):
when you're building things and deploying them. That's right. So oftentimes,
and even some of your previous guests, like Professor Moran Cerf, talk about brain modifications and things like that. He
talked about some of the ethical and regulatory structures that
should be put in place for this. People often talk
about how we need some sort of ethical framework for

(33:18):
evaluating autonomous systems, artificial intelligence, quantum computing, neuromodifications, things like that,
and my argument, the core argument, is that actually there
are frameworks that exist today. Just war theory happens to
be a really good one for applying these principles, and
you just have to think critically about how those new
technologies fit into each of those paradigms. First of all,

(33:39):
a little birdie told me that you do these dinners at your place. I have a feeling that this theory, you guys jam on this at, like, maybe these dinners or some such. Like, can you give us, like, specifics
around the technology you're building now? And if you don't mind,
just like be as honest as possible, because I think
that you know, I understand that these are hot button

(34:00):
issues and that, like, you know, there can
be headlines out of them and everything. But the stuff
you're dealing with is really fascinating, right, and the issues
that you're talking about, And I like what you said
about this idea of having rigorous debate and saying things
we can't say out loud and just kind of like
yelling at each other about theories and this and that.
So can you take us to this compound? It's like
two hours outside of LA, right? Or where, where

(34:21):
exactly is Anduril? It's like, depending on traffic, forty minutes to five hours. I don't know, so much traffic there. I'm just gonna go with, like, five hours, um. But, but
take me to like how you apply this theory into
something really specific that you've worked on, because I can
imagine for you it's challenging and interesting and you really
are kind of thinking ethically about building out this technology that, um,

(34:43):
you guys are going to be deploying and selling to
governments and putting in the hands of other people and
seeing how it will be used. So, like, how has
this theory worked specifically, because when you talk about my
interview with Moran, like, he talks about a lot of
this stuff in theory, you're actually doing a lot of
this stuff in reality. Yeah, that's a really good point.
This is something that has to be approached with a

(35:05):
high degree of seriousness because it does have real world implications.
This isn't a theoretical conversation around you know, some defense
technology that might exist in ten years. It's like, these
are the things that are being deployed today, not only
by Anduril, but by a lot of other companies as well.
And so going to the technology that you guys are
building specifically, say, like, Lattice, right, this technology that's

(35:26):
already been deployed at the border. Right, take us through
the ethical framework. What did you decide not to do?
You know, where did you guys decide to draw the line?
I would say that deciding where to draw the line
implies that there was some sort of conversation about accelerating
beyond what we were building as a product, which wouldn't
be an honest description of what happened. You know, I

(35:48):
think you could extrapolate a million different things, ranging from
like, some sort of laser, kind of in a one-to-one way, or a nuclear weapon, and everything in between. With literally any technology that you build, anything that has a camera on it, you could say, like, that could be used to target for firing many nuclear warheads. I don't know. It's like, that was never part of

(36:08):
the conversation. The entire concept behind the Sentry Tower was
around data collection. That was it from the very beginning.
It was never about anything more than data collection. I
guess I look at like maybe the inevitable, like do
you have conversations about facial recognition or we wouldn't go
near that. I mean where, like where do you draw
the line ethically? Or even when we talk about Ghost,

(36:29):
right, that's something else you guys are deploying, like what
are the other ethical conversations you guys have about those
specific technologies? Yeah, no, that is a really good question.
And I think this is like one of these examples
where the values that we hold as a country are
wildly divergent from our adversaries'. Like facial recognition, really, really
good point. Is it necessary to the mission to be

(36:51):
able to at long range determine who people are algorithmically? No,
not really. What our military wants to do, what
our officers in the Department of Homeland Security want to
do is they want to know when something is happening.
They don't need information about who everyone is as identified

(37:12):
by some sort of by some sort of algorithm. This
is the thing that you know China is doing, and
this is the ideology and technology that they're exporting to
dozens and dozens of countries in partnership with companies like
Hikvision and Dahua and SenseTime and Megvii
and Huawei. I mean, this is that is the Chinese

(37:32):
ideology that they're exporting. That's not something that we're interested
in doing. Does it worry you that you build out this technology?
I'm sure you get this question a lot. You know,
you developed this technology that you believe you do it
in an ethical way, but that it could be used
in a way that is not ethical, That someone could
take what you're doing and then add facial recognition to
the technology that you've deployed. Yeah, I think this is

(37:53):
why it's really important when you're making sales of technology
in the defense space to consider the end user of
that technology and what processes exist to control that. One
of the really important things about this ethical conversation to
us at Anduril in the context of the United
States is that, you know, we shouldn't take the democratic

(38:14):
process for granted. UM. We are blessed to be in
a place that has a system of checks and balances,
a very large, non political civil servant bureaucracy that exists
that makes these decisions, and we have an ability to
change policy at the ballot on an every-two-year basis,
basically at a federal level. And so we don't believe

(38:37):
that the quote-unquote enlightened technocrats in Silicon Valley, um, should
have the ability to decide for the American people what
technology our government should be using and what technology our
government shouldn't be using. You know, we believe people are
really smart and that the government has the ability to
make changes as needed as this tech is deployed. One

(38:58):
of the most interesting questions, I think, when it comes to the future of tech and war, um, is this. I guess I come back to this line. Maybe it's a little dramatic, but it's like, AI to kill or not to kill, right? Like this idea that you guys could be building out autonomous systems that can make the decision to kill. Um, will you do that? I mean,

(39:22):
our policy internally has always been that in almost every case,
a human in the loop makes a ton of sense.
There are certainly cases, which might not even involve human casualties, where you really can't have a human. For example, if you have hypersonic missiles flying at you,
you have like a split second to make a decision

(39:42):
about whether or not you're going to shoot it down.
And these are the types of things, again like Iron Dome, that have been driven by computers. And so there's this
constant conversation that seems to be happening about, well, in
what future world, like, will we just have to make
these decisions? Actually, like we've been doing this for over
a decade. There are computers that are making kinetic decisions

(40:03):
on a regular basis today. When it deals with human life, I think it raises the stakes quite a bit in the engagement of humans in the loop in that decision-making, which does feel really important. I think
one of the conversations that doesn't seem to get enough
airtime is the idea that you can't just wait for

(40:24):
all of the theory around the ethics to be worked
out before you build something, because our adversaries will build it.
And if we look back at history, you can see
that the wielder of the technology, the person that builds
the technology and owns the technology, is really in control
of the standards and norms that are used to deploy
that technology. You can see this with the way that

(40:46):
China is approaching regulation around 5G with the International Telecommunication Union. They have a seat at the table. We
do not have a seat at the table. And if
you go into these conversations assuming that that we're going
to somehow be able to, you know, push our agenda,
I think you will find in history that has been
the wrong assumption. Another example of this is the Intermediate

(41:06):
Range Nuclear Forces Treaty, INF, that we had
with the Soviet Union, where we agreed with the Soviet
Union that we wouldn't build intermediate-range ballistic missiles. China was never a party to this treaty. They moved forward with
building intermediate-range ballistic missiles. Then Russia, when they realized that was happening, they also began building intermediate-range ballistic missiles,
and so we put not only ourselves at a disadvantage,

(41:29):
but we put our service members in the Pacific at
risk for their lives because we were beholden to a
treaty that was not being followed, that was not being
taken seriously. And so whether or not we build AI
for defense, whether or not we build autonomous systems for defense,
whether or not we build better precision fires for defense,
whether or not we build quantum computers for defense, other

(41:51):
people are going to build these things. And we want
to be in a position where we have a seat
at the table talking about how those technologies are being used.
So take me to that seat at the table, as
you have a seat at the table you guys are
sitting there talking about these things. I remember the last
time I interviewed Palmer, like asking him that same question
about like will you deploy technology that can make the
decision to kill? And I remember I think I remember him,

(42:13):
you know, saying right now, no, but that doesn't mean
in the future we won't, you know. And I thought
that was really interesting because I do think it comes
with all these really interesting ethical issues of at what
point and who's coding the decisions it's making, and AI
is so flawed in general. But um, so have those
conversations moved forward with you guys? I mean, what when's

(42:34):
the last time you spoke about it or what was
the nature of it. I can't think of any specific
examples of tech that we're building right now where that
has been an issue. Um, but I think Palmer's answer
is correct. I mean, there are a lot of versions
of just war application that do involve lethality. It's very
very hard to predict the future to say, like what

(42:55):
the conflicts of tomorrow will be, and you know the
types of decisions technologists will have to make in order
to sustain an advantage in those conflicts. But to the
extent that the tech is deployed as a last resort,
to the extent that it is more discriminate, to the
extent that it is more proportional, to the extent that
it ensures right intent and just authority, and it

(43:17):
ensures that human flourishing can continue in a more abundant way. Absolutely,
there are I'm sure there are applications of technology that
will have lethal intent that fulfill and check all those
boxes. That said, I'm sure there are also technologies that
will not and those are the technologies that not only
would I not build, but I also would not invest

(43:38):
in them. We're talking about the future of autonomous weapons
and AI making the decision to kill or not to kill. Are you worried that if it makes a decision in a split second, like, traditionally it's been a human making that decision, and we can put that on somebody, that in the future, if AI makes the wrong decision, the liability could fall on you guys? I

(43:58):
think liability is a complicated issue with all technology, whether
it's you know, self driving cars in the consumer world
or, you know, military technology in the defense world. Of course,
like the liability needs to be worked out at some point,
whether it's through regulation, whether it's through some sort of
legislative action. I don't view that as something that would
deter me from wanting to work in this space, um,

(44:20):
because again I believe in the democratic process, and I
believe that there will be some sort of fair reckoning
um, for these things. I think one of the things
that has kind of always been inspiring to me in
this is that science fiction has kind of thought through
a lot of these ethical challenges well ahead of its time,
like well, well well ahead of its time, and so

(44:40):
science fiction is an awesome place to go to start
talking through some of these really complex challenges, like, for example,
the Prime Directive in Star Trek. Like, you know, Elon's over here talking about interstellar travel and things like that.
It's like Star Trek has worked through more academic research
on the impact of visiting neighboring civilizations than any like

(45:04):
university has. And I think these are the types of
things that we can and should be looking to fiction
to partially inform so that we can be more prepared
you know, at the eventual moment that those technologies come
to fruition, if they come to fruition at all. But
I think Star Trek is a great example of that.
We're going to take another quick break to hear from

(45:25):
our sponsors. But when we come back, a mishap with lightsabers. Yep, you heard me correctly, lightsabers. And if you have questions
about the show, comments, really anything, you can text me
on my new community number zero three four one zero.

(46:05):
So what what would you say, like as an ethicist
and someone who's kind of thinking about these things, And
you say, like let's go specifically to Anduril, like if
this is the kind of thing that could come down
the pipeline, Like, what would you want people to keep
in mind when thinking about deploying these systems with AI
that have the ability to make these decisions? Like what
is the conversation we should be having nationally, Like, because

(46:26):
you talk about the government, you know, you need to
regulate this, but oftentimes the government is you know this
probably better than anyone, like years behind the technology. So
what is the conversation that we should be having about
this kind of thing? Well, certainly within the government, the
DoD, the Department of Defense, has very detailed rules of engagement that they've followed for a very, very

(46:49):
long time. This goes back to, you know, the inclusion
of the Geneva Conventions and United Nations agreements about use
of force, and so I think these types of conversations
come naturally to the defense community. They know how to
think about it. In the tech community, I think it
comes way less naturally. And that's where I have to
engage more with people on it, because you know, these

(47:09):
are generally strongly held, very strong beliefs, with very little
data to back them up. People that say, like, in theory, you know, I would never work on these things, but also haven't considered any of the implications that lead to those decisions. Um, and so it becomes more
of a conversation about presenting scenarios and saying what is
the most ethical way to move forward with this? This

(47:31):
is what the dinners at my house are about. These are not, I'm not hosting Anduril employees at my house. That would be kind of a waste. Give me an example. I think, I guess that's what I'm interested in. Like, give me an example of one of those scenarios that you guys talk about. Sure. So let's imagine
that North Korea either has like humans or robots or
humans in robots, like, MechWarrior-style, um, and they

(47:53):
just like flood into the demilitarized zone, just like thousands
and thousands of objects that are pushing forward. You have
the option of, A, taking a, like, serious kinetic one-to-many action and, you know, firing very large bombs, missiles, nukes, whatever, to eliminate them, not knowing what's good, what's bad,

(48:16):
or otherwise not knowing if there's like a zombie plague
that's, like, forcing everyone to flee the country, um. Or you can do some sort of AI-assisted, like, auto turret,
so there's like, you know, I don't know, guns on
a swivel that you can kind of control and they
automatically lock on target. Or if there was an AI that said, differentiate between robots, people, and people that

(48:39):
have weapons, and only shoot people with weapons and robots; don't shoot any people that are running towards the border without weapons. That's an AI-driven technology, and there is
a lethal kill decision involved, but you could save thousands
and thousands and thousands of lives by executing that strategy instead. A human could never make decisions that rapidly with that

(49:00):
much data flooding into the system. There's just no way
they could do that and deconflict across all those targets at the same time.
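As a toy illustration of how the discrimination rule in that hypothetical might be encoded, here is a short Python sketch. This is deliberately simplified logic drawn only from the scenario above, not anything Anduril has described building; the target classes and the engagement rule are the hypothetical's own.

```python
from enum import Enum

class TargetClass(Enum):
    ROBOT = "robot"
    ARMED_PERSON = "armed person"
    UNARMED_PERSON = "unarmed person"

def may_engage(target: TargetClass) -> bool:
    """The scenario's rule: engage robots and armed people, never unarmed
    people running toward the border. A real system would also have to
    handle classifier uncertainty, and any doubtful case should fall back
    to a human decision rather than an engagement."""
    return target in (TargetClass.ROBOT, TargetClass.ARMED_PERSON)

# The hard part is not this rule but the upstream classifier: if
# "armed vs. unarmed" is wrong even a small fraction of the time across
# thousands of objects, the discrimination principle fails at scale.
for t in TargetClass:
    print(f"{t.value} -> {'engage' if may_engage(t) else 'hold'}")
```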
I mean, and the idea is that even when humans do make these decisions, oftentimes they are tired, fatigued, stressed, and in these different situations having to decide whether to kill or not to kill. I
think if you go and you talk to the soldiers

(49:21):
that served in the last few international conflicts,
decisions that torment them, that keep them awake at night,
are decisions that they had to make in the blink
of an eye. You know, a vehicle driving at high
speed towards one of their bases. You don't know if
that's a sick child and the father is just trying
to get them to medical care as quickly as possible,

(49:42):
or you know, a car full of explosive material that's
going to run into your base and kill service members,
and they have to make these split second decisions about
what to do. They want more information, they want to
be able to make better triage decisions, and by withholding
that technology from them, we're putting people's lives at risk,
both service members as well as civilians. I only play
devil's advocate on AI because then I think about

(50:04):
how flawed AI can be, how biased it can be,
and how sometimes the algorithm makes these decisions and someone's
on the other side of it and you're like, wait
a second, how did that happen? And you question that as well. So I think they're both issues, very different issues, but issues, when it comes to AI
making these decisions, and I think, no doubt the future
will be autonomous systems that are AI driven when it

(50:26):
comes to the future of war. So like that seems
complicated to me as well, since AI is biased. Yeah,
there's bias in both directions. For sure. Human beings have
biases that they don't even realize they have that cause
them to make decisions. Computers have different sets of biases.
To the extent that we can understand the way that
these models are working, we can correct a lot of

(50:46):
those over time. I don't think there has ever in
the history of technology been something that was perfect
at the outset, Like, there's always room to improve. There
are things that we can do to make the models
more accurate, reduce the bias that's implicit in them,
and I think that is important work. But, like, to draw a line for me. I mean, I'm just asking for one line, you know. Like, just give me

(51:07):
something, a certain type of defense technology that you will not build. Like, just, like, what is your no? Like, what is your, like, someone says something, you're like, not that.
I actually think these are really easy and there's a
bunch of them. Like one example would be some sort
of like bio plague that you can release into an
enemy territory. Like coronavirus. Like, pretty sure that's ethically bad

(51:32):
if that was an adversary launching that attack. Anything
that is disproportionately affecting non-combatants is to be avoided in my mind. So I think that's one. Another is
technology that conceals information rather than making it more accessible
to decision makers. You can look at things like the
Great Firewall in China, or El Paquete in Cuba

(51:56):
blocking information, or the denial by Russia of its incursions into Ukraine and Georgia. I think these
are all like examples of places where the government has
made a concerted effort to hide information from the population
to defend potentially a crazy decision that they might make
in the future that's super bad and not something that

(52:16):
I would want to be involved in ethically. Has there been something at Anduril that kind of got left on the cutting room table, like you just decided, we're not building this? None for ethical reasons, because we haven't really, like, edged into any of these crazy territories yet. There are some that have, you know, been left on the cutting table because they didn't work like we thought that they should and weren't worth pursuing. We've had

(52:38):
some crazy ideas around, like, real-life lightsabers and
stuff like that, and it turns out that some of
these things, like the science just isn't ready yet. By
the way, the same can be said for augmented reality.
For decades, soldiers have been really interested in a heads
up display that they can wear in combat that gives
them information on mission objectives, on where enemy actors are

(53:01):
to prevent them from getting involved in friendly fire accidents, so on and so forth. Map overlays. I remember Palmer saying that to me the last time we did an interview, and I thought that was really interesting. He was saying the future almost feels like a bit of a video game, right? Like, and he has the video gaming background from Oculus, but you know, you have this almost
virtual or augmented reality layer that shows you everything around you.

(53:22):
That's right. Now you've just added, like, the idea of a lightsaber too, so my mind is blown a little bit. I think we can probably press pause on the lightsaber. How far did you get? It got far enough that Palmer accidentally cut himself and had to get medical care, so that's a question for Palmer. I don't understand the exact
physics of what happened, but needless to say, we were

(53:44):
no longer building it. By the way, that would have been used for breaching, so instead of putting C-4 on doors, using a plasma cutter essentially to get into denied environments. Oh man, so not actually hand-to-hand combat, just entering. And, so let's see, where's the question? I know it's pretty far back.

(54:06):
It's pretty hard to come back from that. Yeah, we've just, like, gone down this path. Yeah. Should we go back to augmented reality? And so this has been a dream
for soldiers for a really long time, and I think
it would be incredible, Like there are all sorts of
like ethical goods that can come out of this. Primarily,
I think most people would be shocked by how many
of the casualties caused in theater are friendly fire related.

(54:29):
Like, deconflicting all of the stuff that's going on is really hard, and it will only become harder as more autonomous assets are deployed. And, obviously,
our team is the best in the world for building
heads-up displays for defense purposes. I don't think there's anyone in the world that has more talent than

(54:49):
we do around this specific topic. I mean, if anyone's
going to build out something like this, it could be
the Oculus team, right? The team that built Oculus. Yeah, no, I mean, there's no better team for doing it. And our decision was that it's just not there yet. The science is not there yet to make it credible in deployment. There's a DoD contract called

(55:10):
IVAS, and Microsoft won the contract to deploy HoloLens. But it's, like, super test-and-prototype driven, obviously.
Like it's hard to imagine a soldier walking into combat
with, like, a, you know, really heavy, blocky HoloLens
hanging over their head with a really tiny field of view.
And so I think that there's some like test and
evaluation that's happening to figure out what the future might

(55:33):
look like. But you know, we'll be there and ready
when the science is ready to make it possible. And
the software that we're building, the Lattice platform, is, at the end of it, the back end for
feeding all of that data to the operator in the field,
whether it's on their mobile device, you know, over radio
communication or eventually in an augmented reality headset. And have

(55:54):
you had any qualms about working with the current administration? I mean, I know you were part of the transition team for Trump, helping with defense and giving recommendations
on what they should do. Have you had any hesitations?
And I know that you guys aren't political. You say
you're not political. But do you take any of that into account? To an extent, I mean, beyond the fact that, yes, Silicon Valley is notoriously liberal. But I

(56:16):
mean even for you personally, now that we're in an
enclosed room, um, and you know we have time so
you can't really get out, I mean, do you think
about any of this stuff personally, like what you're building,
who you're building it for, where it's going to go,
where it's going to end up? Talk to me. Are
you implying that the door is locked and I'm not
allowed to leave? Did I do that? Honestly,

(56:39):
I don't think it's locked. The reality is, like I
work at the intersection of tech, finance, and government, which
are like the three most hated industries in America. So
I'm not shying away from conflict or controversy. The way
to state our engagement with the government is that we're
working with the United States government. Administrations come and go,
some of them faster than others, and certainly they have

(57:02):
influence in things like budget policy, execution, presidential authorizations.
There are things that presidential administrations are responsible for, but
by and large, the enormous population of civil servants are
the ones that are responsible for executing the mission that
they've been given to execute. And we are not going

(57:23):
to withhold technology from the people that are responsible for
this mission because of who's in the White House. But
you are talking about a White House that has come
under controversy for an immigration ban, and these are technologies
that you're building that could directly have an impact in that kind of way. So I mean, does that ever come into account for you? Does that ever play into how

(57:44):
you feel about it? I mean, I think this is, again, the beauty of the democratic process: if there were some sort of policy that we had a strong disagreement with internally, unanimously, or at least unanimous across the leadership of the company, we would push back and say, like, we're shutting the business down and not selling to the government because, you know, we're afraid of where

(58:05):
it's going. I can't say that it would be impossible
that that would happen, because you know, obviously crazy things
have happened to nation states in distress over the history
of the world. I don't think we're there. I don't
think we're like on the precipice of some sort of
revolution that's going to lead to some terrifying world war
of significant ethical proportion. So in that regard, like the

(58:28):
decision really comes down to, you know, do you trust the democratic process? Do you trust that if a policy oversteps, there are ways that that policy can be corrected through the legislative process or the judicial process? And right now I think the answer is unequivocally yes. We have a Democrat-controlled House of Representatives. They have the ability to legislate. The judicial branch is, you know, functioning in America,

(58:50):
where, like, the constitutionality of the policies that are laid out by the executive and legislative branches is constantly being evaluated.
And as long as that's the case, I think, you know,
it is the right moral, ethical thing to do to
continue working with the government of the United States of America.
I didn't mean to, don't start laughing, but my follow-up was going to be, would you deploy lightsabers to

(59:11):
this White House? I mean, we could talk about, like, the ideology of the last three Star Wars movies, because I think Luke Skywalker in The Last Jedi would certainly say that the answer should be no, because Jedi can't be trusted and they should be eradicated from the Earth because it's too much power. But I

(59:33):
don't think we're talking about that. And I also think J. J. Abrams clearly disagrees and made that very clear in The Rise of Skywalker, which, as a Star Wars fan, I really appreciated. Okay, I want to know. Sorry,
I totally derailed that. Um. What is it like being
involved in a business that a lot of folks in
Silicon Valley kind of disagree with? Because you're looking at

(59:53):
I think it was, you had Amazon employees sending a letter to Jeff Bezos saying, you know, stop selling facial recognition services to the U.S. government. You had, I think, four thousand Google employees protesting Project Maven, this contract work with the government. This is a very sensitive issue, tech companies doing business with the government, especially this current administration. Do you feel that

(01:00:15):
at Anduril that it's not a popular thing you're doing? Two answers to the question. The first is external to
the company, and the second one, I think is internal
to the company. External to the company: first and foremost, it
is incredibly important that I am available, accessible and having
serious dialogue and conversation around what I believe are some

(01:00:37):
of the world's most important questions around the ethics of
defense technology. And this goes for all of the co-founders of the company, Palmer included. That's basically, like, are we going to be dismissive of
people's concerns? No, we should not be dismissive of people's concerns.
Should we be involved in the dialogue around how these
technologies are built, how the regulations are formed? Absolutely, we

(01:00:59):
should be involved in that. And I think that's kind
of key to, you know, having this conversation with you, Laurie,
and to hosting dinners at my house and going down
to Washington, D.C., which I'm doing later this afternoon
to engage in conversations with policymakers. It is really important
to think through all of the implications of these decisions
and make the right ethical decisions to the extent possible.

(01:01:21):
In every case. Internal to the company, I think, is slightly different. So, you know, you could see Brad Smith at Microsoft and Jeff Bezos at Amazon came out and took
strong stances of support for the US government, for the
Department of Defense. Google kind of went in a different direction,
and I think has started backtracking a little bit on
those decisions, um based on just a realization of the

(01:01:43):
control of the business that the activists had internally. But at Anduril, you know, we're in a unique position
because unlike Google, unlike Amazon, unlike Microsoft, our employees signed
up to work on defense technology. And so when you
read those letters that were written at other tech companies,
the key point of contention was that they didn't quote
sign up to work on defense. Well, at Anduril,

(01:02:05):
they quote signed up for defense. They knew that from
day one, and so regardless of political persuasion, you know,
whether they're socialists or libertarians or big-R Republicans or little-r, it doesn't matter. Everyone at the company is
bought into the mission. And so when we have these discussions,
we have ethical discussions. We have mission discussions, but it

(01:02:26):
doesn't come down to the nature of the company or
the vision for the future that we have. May I
make a suggestion? Okay. It'd be interesting, because you guys are having all these ethical conversations about the future. Having covered Silicon Valley all these years, it's a lot of dudes, right? And I don't know about these dinners, and I don't know, you know, what it was like with Palmer

(01:02:47):
when you guys were talking about the future. But whether
it's women or diversity of mindset, you know, that's like
a killer app, right? It's like thinking about empathetic mediums,
thinking about the human element in ways that I don't
think a bunch of the same type of folks will
be able to think about. So I don't know what the breakdown is at Anduril, and I don't want to judge it, but I hope you guys have a diversity of thought in the room when you're thinking

(01:03:08):
about these types of things, just because I do think you guys have a big play at the future. And so, do you guys have diversity of thought in the room? Well, I mean, diversity of thought, absolutely. Other forms of diversity, diversity of thought that translates into diversity of color, all of that, you know, this is something that

(01:03:28):
we work really hard at every day. But don't give
me the Silicon Valley line on it, like, you know what? Yeah, I know, everybody in Silicon Valley is working really hard to change it, blah blah blah, and it hasn't really changed. I agree. We do
have people that are critical to our mission internally that
do not look like standard Silicon Valley people, whether it's skin color or gender or whatever it might be,

(01:03:49):
but of course not enough, and that's something that we
are working hard at, which is, I know, as you said, the Silicon Valley answer. The other thing that I would say here is that oftentimes people say that the defense community is more unbalanced than other industries. I think that is partly true, but there are some really credible

(01:04:09):
minority populations that already exist within defense that I think are very important for us to engage with.
You know, Under Secretary Ellen Lord is the head of Acquisition and Sustainment at the Pentagon. She's, you know, one of the top four people in the department, and we engage with her frequently. There are startup

(01:04:30):
founders like Rachel Olney at Geosite and Daniela Perdomo at goTenna that are active in the defense community. There are academics like Renée DiResta at Stanford that are really strong. In the journalism space, some of the strongest defense reporters are female. I think Lara Seligman at Foreign Policy is incredibly strong in defense,

(01:04:52):
knows what she's talking about. Morgan Brennan at CNBC is really strong. And so, you know, this rejection, or some sort of suggestion that that doesn't exist, I think is just false, and that should be tapped
into for sure. I'd be curious to know when we
look at Russia and China, as someone who's been deeply
involved and looks at these types of threats, what's like

(01:05:12):
the scariest thing that Russia and China are doing that
you're a little bit concerned that in the United States
we're not going to be able to keep up with
or we have to keep an eye out on?
By the very nature of the conversation, the scariest things
are the ones that we don't know about. That's the
first thing that I would say. The second is
that China has an actual strategy around exporting ideology and

(01:05:35):
technology in tandem with one another. And so you know,
if you look at the deployment of or the export
of surveillance technologies globally, as I mentioned before, you have
these companies like Hikvision and Dahua that are camera companies, Chinese camera companies. You have iFlytek and DJI that are drone companies. You have AI companies like Megvii and SenseTime. SenseTime is the

(01:05:56):
most valuable AI company in the entire world. That's interesting to note, too. And then you have all of
the networking and communications infrastructure behind Huawei. And when China
is going out to countries that you know it's engaging
with for Belt and Road or engaging with for loan programs,
they're coming in and they're saying Venezuela, Ecuador, Egypt, Iran,
so on and so forth. Here is not only the

(01:06:18):
technology but also the ideology that wraps up the surveillance
state that we've built in China, and that we've used
to systematically oppress entire people groups using the technology that's
being built by our tech community. That type of stuff
is really scary to me, not only because you know,
it's being exported at a rate that is really unbelievable

(01:06:39):
in the last five to ten years, but also because
this is not the engagement in the US. Our Defense
Department does not work closely with the tech community, and
so post Cold War, our best and brightest engineers kind
of stopped going to the DoD directly or to the defense primes like Lockheed, Raytheon, General Dynamics, so

(01:06:59):
on and so forth, and they started going and optimizing ads
at Google and Facebook. You know, that's where the bulk
of computer science talent is in the United States, and
so our exports, if you look at the exports of
United States technology internationally, particularly related to the military, like foreign military financing or foreign military sales facilitated by the Department of State, it's like F-35s, it's AMRAAM missiles.

(01:07:24):
It's all this really high-end military equipment that's built by primes, and it's not the, like, low-level stuff that actually runs the countries that we're sending it to, and that's really concerning to me. Yeah,
I saw you wrote in an op-ed that Putin, in September 2017, said that AI leadership was a means to become the ruler of the world. China

(01:07:45):
has vowed to achieve AI dominance by 2030. Meanwhile, much of the top AI talent in the United States is working on things such as ad optimization. So that definitely seems
like it's something you think a lot about. Yeah. Absolutely,
you know, some of these countries, for what it's worth, China, Russia,
they actually have the ability to conscript their top talent
into working on national priorities. I'm not suggesting, by the way,

(01:08:07):
that we should conscript talent in the United States. I'm
just stating that it is an obvious advantage that they have,
which means that we need to figure out ways to
appeal to the talent that we need domestically to get
them to work on those national priorities. But how do
you do that? Because there's been so much tension between
Silicon Valley and the government for a while, and the government

(01:08:28):
certainly has a reputation for moving slow and not really
getting things done, and this current administration isn't exactly some
place that a lot of liberal leaning Silicon Valley wants
to be a part of, so it certainly creates a
conundrum of sorts. You know, I think there are
a lot of people in the tech community that not
only would be open to working on national priorities and defense,

(01:08:51):
but would be excited to work on national priorities for defense.
They just need a place to do it. And we
have a model for this. If you look back over
the last thirty years since the end of the Cold War,
there are two, unfortunately only two, but there are two multibillion-dollar venture-backed companies that have done the majority of the work with the government: Palantir and SpaceX. Palantir
and SpaceX have been able to recruit the people they

(01:09:12):
needed to execute their mission. Like, SpaceX has a collection of the best rocket scientists in the world working on the Falcon platforms. You know, Palantir has access to the best, you know, data engineers in the world to build the Palantir Gotham and Foundry products. At Anduril, you know,
recruiting hasn't been the problem. It hasn't been one of

(01:09:34):
the core problems. Like, we're able to recruit and retain top talent. The problem is in the government working with those companies. And so if you look at Palantir
and SpaceX, it took them such a long time to
actually crack into the government industrial base that they literally
needed to have billionaire co founders, like they needed to
have a billionaire working at the company to make sure

(01:09:56):
that the business was able to be financed through to meaningful contract volume. You know, this is something that we realized at the onset when we first started Anduril.
You know, you can debate whether or not Palmer is a billionaire, but I think having the ability to raise capital at good terms, at reasonable terms, for a capital-intensive business, as well as potentially finance the business through the

(01:10:19):
slowness of government is incredibly important. And right now because
of this ecosystem that's been created around the defense primes
and the military, you really have to have this kind of Howard Hughes style of entrepreneurship where, like, only the, you know, ragtag bunch of billionaires are actually able
to build a business, and that needs to be fixed.

(01:10:39):
We need to get to the point where the government
is willing and able to deploy meaningful contracts to companies
that are working on things that are important to them,
rather than just, like, writing a bunch of little grants, which is their strategy today. Can you give us, like, a very visual, I want this very specific if you don't mind, like, what does the

(01:11:00):
future of warfare look like? I mean, you're in the
center of it. You can throw in some of your
technologies you're building, some of the ones coming down the
pipeline that you're not talking about yet, you can throw
in those. Look, what does the future of warfare look like?
How do we battle this out? How do we defend ourselves?
What are soldiers using? Just like take us there. We've
talked about some of these already. I think some of
it is real time situational awareness of the entire battlefield,

(01:11:22):
so, knowing everything that's going on in an environment. So these augmented spaces? Yeah, augmented for soldiers on the ground, to the extent that we require soldiers on
the ground. So I expect that that number will go
down significantly. But if you, if any of the listeners
have read the book or seen the movie Ender's Game,
you kind of know this concept of basically putting yourself

(01:11:44):
in virtual reality and then kind of having a top
down view of a battle space and being able to
manipulate the assets that exist in that space. That type
of interaction is very likely going to be more and
more common. The Air Force is starting to play around
with this in a program that they're calling the Advanced
Battle Management System, and I think that over the next
ten years that's going to become super commonplace. So there's

(01:12:06):
that. I think, you know, as far as engagement, like, kinetic engagement with the enemy will be much more driven by autonomous systems, whether they're remotely controlled or fully autonomous, so we're not putting humans in harm's way where they're
not required. Another great example of this is, there's a venture-backed company called Shield AI that has built

(01:12:27):
a small drone, an aerial system, that can be tossed into
a building and it will do a survey of the
building and allow the operators, the special forces guys whatever
that are about to kick down the door to know
what's behind the door. This type of information collection, that will save people's lives, I think will become more and more pervasive across all of the different battlefields. But

(01:12:49):
really, like, the summarization of all of these categories is
that I think, if we can stay ahead, the future
of warfare is no warfare. And that is the intent,
is that you get to a place where your information dominance,
your battlefield dominance, your weapons platform dominance are all so
real and so large that the gap is insurmountable

(01:13:11):
and the enemy won't want to engage in combat because
they know that they'll lose. And you know, there's all
sorts of, like, crazy science fiction versions of this.
One of my favorite science fiction authors is this guy
named Vernor Vinge, and he has a series called The Peace War series, and in it he talks about this, like, force field that he calls a bobble, and

(01:13:32):
inside the bobble time is frozen and it's impenetrable. And
so if you had two enemies that were fighting one another,
you just bobble one of them and then you go
to the other one and say, if you don't stop,
we're going to bobble you. And then you bobble them,
and you unbobble the other one, and you say, I'm
gonna say the same thing to you. If you don't stop,
we're gonna bobble you for ten thousand years, and then
you unbobble them, and you say, like it's your choice.

(01:13:54):
You can engage in combat or you can be bobbled
for ten thousand years. And basically the moral is that
conflict basically just stops because people realize that the cost
of engaging in conflict is way too high. And I'm
not saying that we're, like, building a bobble, or that we're anywhere near building a bobble, but I think every
piece of technology that you build that continues to build
upon your advantage gives us the ability to control to

(01:14:17):
some extent the amount of conflict that's happening globally. Could
the future of war be fought solely by artificial intelligence?
I mean, I don't think so. This conversation is so
far out there that it's always so hard to engage
in, like, a credible way. You know, there's movies like WarGames, where the computer decides that the fate

(01:14:38):
of humanity will be better off if it just, like, nukes everything to oblivion. You know, I think that humans
are responsible for making human ethical decisions, and it will be that way for a very, very long time. And to the extent that computers are responsible for making decisions, we should be working on ways to counter that threat,

(01:15:00):
to prevent that from becoming, kind of, like, the standard for how conflict is managed. I only
push back on that to say, well, isn't it y'all's job to think kind of far into the future? Because the problem in technology is that it seems
as though some of the folks in Silicon Valley haven't
thought far enough into the future and we see all
these human problems that have arisen. Yeah, I would say

(01:15:20):
that there's some of this that is academic and true,
like we should be having academic conversations about these things.
But this is not how conflict has been managed over
the course of history. Like we didn't have a detailed
discussion about the atom bomb and come up with, like,
ethical frameworks for how we think about it, and then
like only then after we've like perfected the ethical framework,

(01:15:43):
decided to build it. Same thing with, like, chem-bio, same thing with precision-guided weapons, same thing with cyber. Like, these things get litigated, regulatorily litigated, by the
people that hold the technology. If we sit back and
just say, like we're going to spend the next thirty
years like having a bunch of like fireside chats at
conferences with a bunch of academics about what each of

(01:16:06):
these defense technologies could mean, and we don't build it, guess who's going to build it? All of the other countries that are not having that academic dialogue. And we'll be sitting on our hands when they have a critical national security advantage against us that puts our own
lives at risk. That seems like a really bad trade off.
Speaking of all of the other countries, who will you

(01:16:26):
not do business with? Yeah, I mean, it's a great question, and, you know, it's case dependent, it's process dependent, it's governmental-system dependent. Certainly we will work with the close allies, and so the Five Eyes community: Australia, UK, Canada, New Zealand. There are
no questions about our close allies, but we have to

(01:16:49):
have rigorous conversation about really anyone else. So no China,
No China. Although I think the only reason China would
want our technology at this point is to steal the
IP and develop it for their own use.
I don't think that they would actually be interested in
being, like, a paying customer. Have you turned anyone down?

(01:17:09):
You know, we have so much inbound interest right now in what we're building that, you know, we turn down most of what comes into our funnel. But have
you turned anyone down for those types of reasons? Like an ethical reason? We have decided not to follow
up with people because we thought that the use case

(01:17:29):
violated some ethical principle. I wouldn't want to, like, start throwing people under the bus who I never even responded to an email for. That seems cold-hearted and unnecessary. Fair. Lately, I've become obsessed
with this idea of spies and this idea that you know,
there's so much valuable IP and Silicon value. You guys
aren't Silicon Valley based, but I'm assuming you know you

(01:17:51):
guys are a valuable company. I'm assuming you guys do
background checks and all sorts of stuff. But do you
ever think about, um, the employees you guys hire, or
even in Silicon Valley as a whole, worried about people kind of infiltrating these companies? I know it's not necessarily an Anduril focus, but just in general, having your experience in the government and now in Silicon Valley, and you were at Palantir, do you ever worry

(01:18:11):
about nation states kind of infiltrating these companies for valuable data? Of course. Yeah, I mean, I think our adversaries,
particularly China, has made no secret of its interest in
disrupting our defense industrial base and stealing intellectual property to
the extent possible. The similarities to the F-35

(01:18:33):
of their fifth-generation fighter are striking. Like, it seems to me like they're actually doing a pretty good job at stealing IP when they want it.
This is a huge concern. I mean, if you look
at like the impact of the tariffs that have been
recently implemented, compare that to the cost to the American
economy for IP theft from China, and it's like there's
not even a comparison. Like, they're just ripping off

(01:18:55):
so much. And so yeah, of course, like, it would be crazy to not assume that they're trying to
get at the personal data of the people that are
working on these top priorities, as well as information proprietary to the companies that are working on these priorities. Have
you had discussions about it at Anduril? Oh sure, I
mean information security is a critical piece of the pie

(01:19:16):
for everyone working in the defense industrial base. We
have a crack team of infosec professionals that spend their
entire day thinking about how to lock down the edges
of the network, how to think through insider threat. You know, it's a core competency that I think you
have to have. And I know that for the border
control tech, you guys actually went out there. I don't

(01:19:36):
know if you went out there, but I know Palmer
went out there and actually, like, was looking at this technology, deploying this technology, playing around with it. You know, you're out on the border, I'm sure there are certain dangers. Have you ever worried about your own safety? I
wouldn't say that I've worried about my, like, physical safety.
I've definitely spent a lot of time thinking about my

(01:19:56):
digital footprint and making sure that I'm not exposing myself or my family to undue risk. And so
there's always this like kind of digital hygiene exercise that
you can go through to try to protect against that. Last question: why do you do this? It's a sense of duty, to be honest. I mean, it would be

(01:20:20):
crazy to make it about some sort of like sacrifice,
because I think, you know, being an entrepreneur is a
ton of fun, and I think if we're successful, I
think there's financial reward for our employees, for our investors,
certainly for the founding team. So you know, I'm not
going to act like a martyr. But at the same time, like,
these are really hard problems and we're not building an

(01:20:43):
app to share 140 characters in a slightly better way
with our friends. Like this isn't popular, and I think
we have to unify around the idea that it's really important.
And going back to September 11, 2001, sitting in my principal's office, I knew at that moment
that this was going to be the career that I
was going to work on. I didn't know that this

(01:21:03):
is what I would be doing specifically, um, but I
can't imagine, you know, going to work every day and
not thinking about how I can be helpful to our
national security, to the priorities that are set forward and
the values that our national security stands for. Why did you know that that was going to be what you were going to do for the rest of your life? I think,

(01:21:25):
you know, part of the lie that's being told to
the world, particularly by the kind of modern culture, whether
it's like millennial gen z, whatever, is that there's absolute
equivalence in like all things like morality, culture or whatever.
And I think events like nine eleven kind of

(01:21:47):
stuck with me as this realization that there's a real
world out there, and like we can't just hide in
this little bubble of comfort and say, like, actually everyone's
the same, everyone believes the same things, everyone values the same things. Because I just fundamentally think that there's something
about democracy, there's something about capitalism, there's something about

(01:22:11):
the freedom that we're afforded that's worth defending,
and without that, you end up living in these authoritarian,
oppressive societies where none of those values can be exercised.
And you know, I don't really care if I can
open Twitter on my phone and, you know, yap at

(01:22:32):
people about the political issue du jour, but I do
care a lot about my ability to exercise freedom of speech,
and that's something that is not protected in many places
in the world. So this episode had a lot to
take in. I'm guessing you guys might have some thoughts.

(01:22:55):
What do you think of Anduril and the technology they're building?
Where should we draw the line? Text me on my
new community number five zero three zero. It goes directly
to my phone. I promise, I'm not just saying that.
And here's a personal request. If you like the show,
I want to hear from you, leave us a review

(01:23:15):
on the Apple podcast app or wherever you listen, and
don't forget to subscribe so you don't miss an episode.
It helps us out a lot. And follow me: I'm at Laurie Segall on Twitter and Instagram, and the show is at First Contact Podcast on Instagram. On Twitter, we're at First Contact Pod. First Contact is a production of Dot Dot Dot Media, executive produced by Laurie Segall and

(01:23:35):
Derek Dodge. This episode was produced and edited by Sabine
Jansen and Jack Reagan. Original theme music by Zander Singh.
First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeartRadio.