February 24, 2020 83 mins

Should Artificial Intelligence be able to make the decision to take a human life? And if it does, who will be liable if — or when — it goes wrong? When it comes to the future of war and technology, the ethics are murky.


Tech is creating a new arms race. Will the U.S. be able to keep up with the likes of China and Russia? And what ethical lines will we draw, or cross, to maintain our national defenses?


Let’s rewind to Orange County circa 2017: A handful of entrepreneurs — eating Chick-fil-A and Taco Bell — sat around a table exploring the idea that what the United States needs is a real-life version of Stark Industries. Yes... from Iron Man. That brainstorming session led to Anduril — a defense technology firm that’s since become a billion-dollar company at the center of the debate around the future of war.


Laurie Segall sat down with Anduril’s co-founder, Trae Stephens, who spends a lot of time thinking about the philosophy of war and how technology is transforming it. In this episode of First Contact, we explore a framework for redefining war — where the front lines of futuristic battlefields are blurred, and technology is leading the charge. Expect rigorous debate. Unpopular viewpoints. And uncomfortable scenarios.

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

See omnystudio.com/listener for privacy information.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
First Contact with Laurie Segall is a production of Dot
Dot Dot Media and iHeartRadio. I guess I come back
to this line. Maybe it's a little dramatic, but it's
like AI to kill or not to kill? Right? Like
this idea that you guys could be building out autonomous
systems that can make the decision to kill? Will you

(00:24):
do that?

Speaker 2 (00:25):
It's very, very hard to predict the future, but to
the extent that the tech is deployed as the last
resort and it ensures that human flourishing can continue in
a more abundant way.

Speaker 1 (00:37):
Absolutely. To kill or not to kill? Should artificial intelligence
have the power to make that decision? And if it does,
who will be liable if, or when, it all goes wrong?
Tech is creating a new arms race. Will the United

(00:57):
States be able to keep up with the likes of China and Russia?
And what ethical lines will we draw, or cross,
to maintain our national defenses? When it comes to our
future, the ethics of war and technology are murky. Now,
with that in mind, I want to take you to
Orange County. Picture a handful of entrepreneurs, some of them

(01:19):
are controversial, sitting around a table. They've created a PowerPoint
outlining some radical ideas for the future of defense technology.
They're eating Chick-fil-A and Taco Bell and exploring
the idea that what the United States really needs is
a real-life version of Stark Industries from Iron Man

(01:40):
to build defense technology that would protect the United States
from future threats. Fast forward. That brainstorming session led to Anduril,
a defense technology company that launched in twenty seventeen. Now
it's a billion dollar company and at the center of
the debate when it comes to the future of war.
One of its founders, Trae Stephens, spends a
lot of time thinking about the philosophy of war, how

(02:04):
technology is transforming it, and how we're going to protect
ourselves as a nation. Expect rigorous debate, unpopular conversations, uncomfortable scenarios,
some talk of superheroes and science fiction, and a framework
to talk about war where the frontlines of futuristic battlefields
are blurred and technology is leading the charge. I'm Laurie Segall,

(02:27):
and this is First Contact. Well, generally I talk about
my first contact with folks, and I have no first
contact with you. This is the first time, Trae, that
I've ever met you. So welcome to First Contact.

Speaker 2 (02:44):
Thank you for having me.

Speaker 1 (02:44):
But this isn't my first contact with this idea of
like war and ethics and defense technology. Because I know
your co-founder, Palmer Luckey. I've interviewed him many times.
He's an interesting dude.

Speaker 2 (02:56):
He is, yeah, super super interesting, super unique guy.

Speaker 1 (03:00):
How would you describe him?

Speaker 2 (03:01):
I think he's one of these people that's thought deeply
about most things. Rather than having, you know, high conviction
but shallow knowledge, he has high conviction and lots of knowledge.
There seems to be this high correlation between people in
the tech community that are kind of like obsessively passionate
about topics and slightly eccentric in personality, that makes them

(03:24):
a great fit for entrepreneurial endeavors. And Palmer is totally
one of those people.

Speaker 1 (03:30):
He's the founder of Oculus, we should mention for our listeners,
which is a virtual reality company. Oculus sold to Facebook for
three billion dollars. And I remember my first time meeting
him was at Web Summit. It was years ago in Ireland,
and he always wears Hawaiian shirts and he always says
things that definitely get him into trouble. And he

(03:50):
ended up leaving Facebook because of some controversy and starting
this company that you started with him, that we're going
to get into all about defense technology, which is also
interesting and controversial in its own right. And it's called Anduril.
And I remember the last time I was at Anduril's headquarters,
it was probably a year and a half ago. He's
like walking around barefoot. I just want to like paint

(04:11):
the picture because it is quite fascinating of all sorts
of like really fascinating technology that's like the future of warfare.
And it's in la it's not in Silicon Valley. And
you have a founder with a Hawaiian shirt who's like
kind of like a billionaire. I think he's a billionaire, right,
depends on who you ask. I guess, okay, unconfirmed, but
multi multimillionaire. And you guys are just dealing with these fascinating,

(04:34):
fascinating issues that a lot of people in Silicon Valley
shy away from. So that's how I'll kind of set
it all up. Was that Is that a fair way
to set it up?

Speaker 2 (04:42):
Yeah? I think the only thing that Palmer would be
disappointed if I didn't point out is that Palmer did
not leave Facebook. Palmer was fired from Facebook. That is
an important distinction. But yeah, I mean he is, as
I said before, eccentric. He very rarely wears closed-toe shoes.
I've seen him get in trouble with this before. We
were on an off site together and he was playing

(05:04):
laser tag on a mountain bike, which is something that
I guess you do, and going downhill very rapidly and
went over the handlebars with his Hawaiian shirt open and
barefoot of course, and ended up like pretty badly scraping
up his chest and cutting his toe on I'm assuming
the pedal. But the kind of cool thing about him

(05:26):
is that he has that playful eccentric side, but he's
also incredibly serious and so there are very clear reasons
why he was on the mountain bike, why he was
charging downhill, what his strategy was for doing that. So
you know, this is like the kind of the combination
that I think makes him so unique is that it's
deep seriousness, deep strategy, but also eccentric and playful.

Speaker 1 (05:47):
When we talk about Palmer's background, your background is like
completely different in a fascinating way, and so as co-founders
it's super interesting, because your background is in intelligence, right,
and I saw that you were a student during nine
eleven, and I read that that deeply impacted you,
and you ended up working for government, so you didn't

(06:09):
quite have the Silicon Valley background that a lot of
the people you're working with had, right.

Speaker 2 (06:14):
Yeah, I was a senior in high school when nine
eleven happened. Prior to that day, I thought I
was going to go into journalism, actually, and I remember
sitting in my principal's office in high school watching the
TV and talking to him about what it meant for
the world, and really decided on that day that I
wanted to go into a career in service to the country.

(06:36):
Had kind of toyed with whether or not that meant
going to the Air Force Academy and trying to be
a fighter pilot or going into the intelligence route, probably
via Georgetown, which is where I ended up going. Ended
up doing the intelligence thing, mostly from a probability perspective,
just realized that I might not end up getting that

(06:56):
fighter pilot billet and could have ended up doing something
far less interesting in the air. But yeah, it all
kind of goes back to that day.

Speaker 1 (07:03):
And so talk to me about the government days. What
exactly were you doing?

Speaker 2 (07:07):
Well, obviously I can't say exactly what I was doing.

Speaker 1 (07:10):
Nice. If we were going to talk around it in code, could you?

Speaker 2 (07:14):
If we were going to talk around it? I was
working the counterterrorism mission, specifically focused on computational linguistics
for Arabic script-based languages. Okay, I'll explain all of it.

Speaker 1 (07:25):
Okay.

Speaker 2 (07:27):
So basically I studied Arabic in college. And one of
the things that you find very quickly is that the
language works very differently than the Romance languages, and the
naming conventions are way harder to figure out, and they're
less unique than English names. So the most common name
in the English language, I think it's John. It's like

(07:48):
three percent of all English-speaking men have the name John,
either first, middle, or last. In the Arabic-speaking world, something
like over eighteen percent of Arabic-speaking men have the
name Mohammed. So that makes name matching incredibly challenging. So
if you have a name like Mahmoud Abbas, that could be
literally like hundreds of thousands or millions of people with

(08:10):
that exact name, Mahmoud Abbas. And so the job for an
intelligence officer is significantly more challenging when dealing with these
foreign languages because you have to figure out is this
person actually the person that I'm trying to get reporting
on or is it just another person that has the
same name. And so I was working on trying to

(08:30):
resolve those entities with one another across a vast amount
of databases, which is a pretty tough problem.
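
To make the name-matching problem concrete, here is a minimal sketch in Python of the kind of fuzzy matching involved. Everything in it, the variant table, the names, the scoring, is invented for illustration; real entity resolution tooling in the intelligence community is obviously far more sophisticated and not public.

```python
# A minimal, hypothetical sketch of matching transliterated Arabic names.
# Spellings vary wildly ("Mahmoud Abbas", "Mahmud Abbas", "Mahmood 'Abbas"),
# so exact string equality fails; a first pass is to normalize common
# variants and score fuzzy similarity instead.
from difflib import SequenceMatcher

# Illustrative variant table; real systems use far richer phonetic models.
VARIANTS = {
    "mahmud": "mahmoud", "mahmood": "mahmoud",
    "muhammad": "mohammed", "mohamed": "mohammed", "muhammed": "mohammed",
}

def normalize(name: str) -> str:
    # Lowercase, strip apostrophes, collapse known spelling variants.
    tokens = name.lower().replace("'", "").split()
    return " ".join(VARIANTS.get(t, t) for t in tokens)

def similarity(a: str, b: str) -> float:
    # Fuzzy similarity in [0, 1] between two normalized names.
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

query = "Mahmoud 'Abbas"
for candidate in ["Mahmud Abbas", "Mahmoud Abbas", "Mohammed Abbasi"]:
    print(candidate, round(similarity(query, candidate), 2))
```

Of course, string similarity alone cannot settle identity when nearly a fifth of a population shares a name; a real system has to join on dates of birth, locations, and known associates across many databases, which is what makes the entity resolution problem he describes so hard.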

Speaker 1 (08:37):
So, was there anything that was like, Okay, I'm
leaving the government, I'm going to Silicon Valley and working
at Palantir. Was there anything that happened that was kind
of the catalyst?

Speaker 2 (08:46):
I think it's just like the grinding down of bureaucracy.
I certainly never thought that I was going to work
for a Silicon Valley venture backed tech company when I
graduated from college. That was never part of the plan.
In fact, I remember joining Palantir and asking really crazy
questions like, what is a stock option? What is equity?
Why does that matter to my comp? What is a

(09:07):
vesting schedule? I knew nothing about how these things worked,
and I think basically coming to the conclusion that I
wanted to be at a place where I was surrounded
by a bunch of people who were way smarter than
I was, who were much better than me at the
things that I was bad at, where I could add
unique value and that moved significantly faster than the bureaucracy

(09:27):
of the government, all the while still being able to
work on the mission that was really important to me.

Speaker 1 (09:33):
Palantir, for folks who don't know, is associated with Peter Thiel.
Did you meet Peter, or was it that you got recruited? How did that work?
How did that work?

Speaker 2 (09:40):
I did not meet Peter. I actually got into Palantir
from the guy that gave the demo to me when
I was working in the intelligence community. It was a
really small team at the time. I think there were
probably like less than twenty people at the company, and
I saw a demo, got really excited about it, kind
of pushed internally to be able to use it, was told no,

(10:00):
and then ended up you know, jumping ship and joining
the company really early on.

Speaker 1 (10:04):
And how did you end up meeting Palmer?

Speaker 2 (10:07):
Yeah. So I got to know Peter really well over
my time at Palantir, and in twenty thirteen he
asked if I would be interested in coming to join
Founders Fund, which is a Silicon Valley venture capital firm
that Peter and some of the other PayPal founders started together.

Speaker 1 (10:23):
And we should also say, like, Founders Fund, for folks
who don't know, is an interesting venture capital fund in
Silicon Valley, because they say unpopular things sometimes or invest
in things that are not just standard, right? Is that
a fair thing to say?

Speaker 2 (10:37):
Yeah, that's totally fair. We have a manifesto on our
website that I think when it first came out was
pretty unpopular. People thought we were a little crazy. The
cool thing about it has been over the last really
fourteen fifteen years that the fund has been around, these
ideas have become actually kind of weirdly popularized.

Speaker 1 (10:56):
What was the manifesto.

Speaker 2 (10:58):
It's a kind of long form essay on how we're
thinking about the future and how tech has stagnated and
how venture capital as an industry hasn't served entrepreneurs well,
and kind of diving into all of those reasons. But
you see some of these things cropping out of it,
like the founder friendly ethos of a venture fund. This
idea that actually the founders are the most important single unit,

(11:22):
elemental unit of a successful venture and you should as
a fund do the things that are optimizing for helping
those people. That has become super common, Like everyone talks
about being founder friendly as a VC, but you still
see really great funds that are firing CEOs, that are
taking aggressive governance stances against them, and this is something

(11:45):
that we foundationally will not do. We will not vote
against founders, we will not fire founders. We believe that
we should be optimizing for upside and not trying to
mitigate mediocre outcomes. You know, there's a bunch of other,
like more specific ideological things that you've probably seen on
Twitter or heard in the media about the positions that

(12:06):
we take. But yeah, we are... we're pretty different, I
would say.

Speaker 1 (12:09):
Right. And so you met Palmer, and you guys
started talking about war and defense, not initially, so what
were those initial conversations like?

Speaker 2 (12:19):
Yeah, Founders Fund was the first institutional investor in Oculus. So
we've known Palmer since I think, you know, he was
like a teenager at that time.

Speaker 1 (12:27):
I mean, and by the way, let's be honest, he's
not that much older now. He's like twenty six now,
or twenty seven?

Speaker 2 (12:31):
He's twenty seven. But yeah, he's not that
much older now, that's true. Yeah, I met him, was
totally blown away by Oculus, bought one of the original
dev kits, have kind of enjoyed interacting with him, just
because of his passion and creativity, and had a couple
of conversations with him about this search that I was
doing at Founders Fund. So when I first joined the fund,

(12:54):
not knowing anything about venture capital, not knowing anything about
how the tech community works aside from my one experience
at Palantir, I decided that I wanted to go and
find the next Palantir, the next SpaceX. So Founders Fund is a
large investor in both of those companies, and the theory
was those success stories should breed other success stories, and
so I went out and looked at every company I

(13:15):
could find that was doing business or interested in doing
business with the government, and ended up not really finding anything.
And I was talking to Palmer about this and saying, like,
you know, it's pretty crazy, given that our adversaries, Russia,
China, and, to a lesser extent but still present, Iran, North Korea,
were doing things that were really challenging the strategic positioning

(13:36):
of the United States and our allies globally. And Palmer
agreed and had a ton of thoughts, as Palmer does
about many things, and I basically had this conversation with
him where I said, you know, what the United States
really needs is Stark Industries from the Iron Man movies. We
need a company that is going to be well capitalized,

(13:57):
that will build products for the community, not requirements programs.
We're not doing a services contract on a cost-plus basis,
but building products vertically and selling them directly to the
defense community to advance our ability to defend and deter.
He got really excited about that idea, and that was
kind of the early phase of him departing, as we said,

(14:20):
from Facebook and starting this company in twenty seventeen.

Speaker 1 (14:23):
And I read that there was like a brainstorming session
that happened. I think there was Chick-fil-A involved,
some crazy PowerPoint where you guys are putting the future
of warfare on there and there's all sorts of different
future of war slides. Can you just like take us
to that, what that looked like? Take us to that
brainstorm, I mean, because you're speaking in like very above
the whatever, like Silicon Valley terms, Like, just take us

(14:45):
to what that brainstorming session looked like. We're in Orange
County because that's where Palmer lives. Yep, you guys are
all sitting around eating Chick fil A. I'm from the South,
so I can get on board with that, you know,
explain what that jam session looked like. What this PowerPoint
of the future, that could be borderline dystopian if we're
not careful, we'll get into that, looks like, and like
how this whole thing happened. Just go there,

(15:06):
sure. Don't speak in Silicon Valley terms. That's what it
actually looked like.

Speaker 2 (15:09):
Yees. So it was at Palmer's house. This would have
been I think in April or May of twenty seventeen.
We had invited a bunch of our smartest friends that
we could find that could be potentially interested in leaving
their current jobs to come do this with us.

Speaker 1 (15:24):
And anyone interesting that we'd know?

Speaker 2 (15:27):
A lot of the early employees of Anduril. So we had
Matt Grimm, who is the co-founder and COO; Joe Chen,
who was, I think, one of the first employees of Oculus
with Palmer. And Palmer and I and Matt Grimm had
put together this deck that you're referencing, that was basically like,
what would a stable world in the future look like

(15:48):
if we were actually able to deter violence? Like how
do we stop bad things from happening and do that
in a way that preserves our leadership over time. There
was Chick fil A. I will say that I'm allergic
to chicken. Palmer knew that I was allergic to chicken,
and Nicole Palmer's now wife then girlfriend went to Taco

(16:09):
Bell and bought an entire bag full of bean burritos.
And so everyone's eating chicken and I'm sitting there.

Speaker 1 (16:15):
Just like, good, kept it classy.

Speaker 2 (16:17):
Crushing bean burritos. Yeah, Palmer is is an avid fast
food aficionado, and so he knows all the all the
places to go and the things to do. But that
was our dining of choice that day, and we kind
of just went through this whole plan, like what would
Stark Industries look like?

Speaker 1 (16:33):
I love that, you going back to like Iron Man.

Speaker 2 (16:35):
Yeah, you know.

Speaker 1 (16:36):
I mean it's like, when we're thinking about the future
of defense technology, how we're going to protect ourselves from
like Russia and China, Like it goes down to like
Chick-fil-A, bean burritos and Iron Man and like
a guy who likes to be barefoot. And I don't
mean this in like a snarky way, like this is
like what we're painting the picture of It's kind of interesting,
you know, I think.

Speaker 2 (16:53):
I think this is one of the things that most
people don't realize about the defense community over really like
the last hundred years, is that these moments, these kind
of crucial moments in our history have been defined by
like founders, by entrepreneurs. Maybe they worked for the government,
maybe they didn't. But you had Bennie Schriever with ICBMs.
You had Kelly Johnson with the U-2, the

(17:15):
SR-71, the F-104. You had Admiral
Rickover with the nuclear Navy. You had Howard Hughes and
all the things that he did in the aviation space.
And we've kind of gotten into this weird, like quasi
communist state where like no one's actually responsible for anything,
Like there are no founders, it's just bureaucracy that's running
all of these multi, multi, multi hundred-billion-dollar

(17:39):
defense programs. And so I think the usage of Stark
Industries as the analogy is actually really powerful because it
says Tony Stark, the person, is actually the company. And
you see this tension that happens over the course of
all of the comic books and the movies where he
keeps trying to turn over responsibility for certain things, but actually

(18:00):
like it's him that makes this stuff work. And I
think Palmer has this powerful personality where he's able to
pull together really incredible people, really incredible engineers, work through
a vision and actually deliver the products that we say
we're going to deliver.

Speaker 1 (18:15):
Right, Okay, So go back to that day, and so
you guys had put together this PowerPoint and what did
it have on it? It had all these different what
the future of warfare looked like? What did the future
of warfare look like?

Speaker 2 (18:25):
I think future of warfare is probably the wrong term,
because the idea is actually to prevent warfare, right, Like
you want to deter violence, you don't want to have
more violence. Yeah, And so the PowerPoint was kind of
talking about what are the near term things that we
can do using existing technology, what are the medium term
things that we can work on that are probably going

(18:46):
to be possible in a five to ten year time horizon,
And what are like the science fiction ideas that would
completely change the game and force a paradigm shift in
international affairs. And we just had this kind of crazy
brainstorming session where everyone was pitching in their ideas saying,
actually that's not possible, or actually that's more possible than
we think that it is, And we were trying to

(19:08):
come to a rough agreement on what the first couple
of years of a company would look like if we
ended up starting it. And I think everyone left that
day feeling not only really excited about the vision and
commitments in hand for many of them, but also an
idea of the types of things we were going to
be able to build and a rough prioritization of what

(19:28):
that would look like.

Speaker 1 (19:30):
And not to state the obvious though, when we're talking
about the superhero bit, but you are coming with a
founder who was pushed out of Facebook for controversial comments
he made in like alt-right forums. You have Palantir,
which you were a part of, which has come
under fire for surveillance issues and that kind of thing,
and is associated with Peter Thiel, who's most certainly a

(19:52):
controversial figure nonetheless. So are you guys the superheroes to
do this? And how did you prepare to respond to
people looking at this team and saying, are we going
to trust this team to build and defend our future?

Speaker 2 (20:06):
Yeah? I think everyone on the team has wildly diverse
ideological political beliefs. You know, Palmer's political beliefs are not
indicative of the rest of the founding team nor the company.
You know. The kind of the crazy thing about this
whole conversation is that by U.S. standards, Palmer is pretty
boring in his political beliefs. Like he's a he's a libertarian,

(20:27):
he believes in limiting government regulation, he believes in, you know,
free market economy, and his engagement in the political sphere
is kind of limited to pursuing those things. I think
he's gotten a lot of really unfair press treatment that
does not resemble reality. And you know, I think the

(20:48):
company by and large is very aware of that, from
a founding team to the employees, we're all very aware
of that. The important thing about the association with all
of those kind of different aspects that you just mentioned
is that we have open, harsh dialogue internally about everything
about the products that we work on, about the programs

(21:09):
within the governments that we work on, about the countries
that we work with. And that would only be possible
if the people involved, particularly in leadership, were open and
receptive to that rigorous debate. And I think this is
something that Palmer found at Facebook, is that that was
not a culture that was open to rigorous debate. In fact,
it was only open to a single ideological bent that

(21:32):
is militarized in a way that you know, I think
the American people should all be pretty concerned about.

Speaker 1 (21:39):
I think it's interesting. The last time I interviewed Palmer,
I had just finished doing a series on conservatives undercover
in Silicon Valley, and like literally interviewed conservatives who didn't
feel comfortable coming out and talking about being conservative in Silicon Valley.
We had to do it in shadow. This is before
I mean, there's certainly a culture war playing out, which
is a whole separate conversation, but we're seeing that on
a grand scale. Now we're going to take a quick

(22:02):
break to hear from our sponsors. But when we come back,
what exactly is Anduril building? A look inside the technology. Also,
if you like what you're hearing, make sure you hit
subscribe to First Contact in your podcast app so you
don't miss another episode I want to talk about Andrel's

(22:34):
just in general, like what you guys are doing. So
Palmer's like the product guy, and he's like building out
crazy interesting technology and you're thinking, I think a lot
about these ethical issues and some of the more philosophical
issues as we kind of head into this future for
folks who don't know Anduril. First of all, isn't this
a Lord of the Rings reference, if I'm getting it correct?

Speaker 2 (22:56):
It is. Yeah, it's Elvish in Lord of the Rings
for Flame of the West.

Speaker 1 (22:59):
Okay, what's the reasoning behind that?

Speaker 2 (23:03):
It seemed really appropriate, still seems really appropriate. I mean,
Andúril was the sword Narsil reforged. Narsil was the sword
that cut the One Ring from the hand of Sauron
in the earlier ages, and it was reforged as Andúril
to be wielded by Aragorn during the Fellowship. So it
has kind of a storied history in the series that, to

(23:24):
Tolkien fans, seems to make a lot of sense.

Speaker 1 (23:27):
And so explain kind of the premise of what you
guys are building.

Speaker 2 (23:30):
Yeah. So again, our kind of view of the future
is that if we're not very intentional in the West
with building the technology that's required to protect and preserve
our values in our way of life, that these technologies
are going to be built by our adversaries, and we
shouldn't trick ourselves into believing that our adversaries are in

(23:53):
some way our moral equals. They're not, and we want
the best technology that is going to deter conflict to
be controlled and operated by the people that have the
value system that we share. And so, you know, anything
that we can build that fulfills that mission while being
very cognizant of the ethical considerations, the very real, very

(24:16):
meaningful ethical considerations that are required as those things are built,
is something that we're really interested in working on.

Speaker 1 (24:21):
And so I mean, in a most baseline thing, you
guys are building out consumer tech products, generally with artificial intelligence,
that you sell to our allies and the government.

Speaker 2 (24:30):
Yeah, not everything is consumer. In fact, a lot of
the hardware that we integrate into our systems is not
intended for consumers. It's enterprise grade equipment. But yeah, basically,
the way that you can think about it is that
we're building both hardware and software to achieve mission ends
at the lowest cost possible for the taxpayer. And so
rather than building a bunch of bespoke technology on as

(24:54):
I said before, these cost-plus contracts like the F-35,
the Ford-class aircraft carrier, or even these
bespoke software programs like the Defense Travel System and all
sorts of stuff like that, we're trying to take the
best of class that exists off the shelf, integrate it
with some things that we do have to build ourselves
because it's not available commercially, and then turn it into

(25:14):
a product that works today as well as we say
that it works and turn that over to the government
customers to use for their mission.

Speaker 1 (25:21):
Can you give us a sense of the products you're building,
the stuff that's out to market now, and who
you're working with.

Speaker 2 (25:26):
Yeah. So our first product is what we call a
sentry tower. It's a thirty-some-foot tower that has
integrated sensors. So instead of security officers sitting in a
dark room with hundreds of little CCTV feeds, the tower
is just telling the operator when something is happening that
they need to look at. This is really critical for

(25:47):
all sorts of critical infrastructure, whether it's military bases, oil
and gas facilities, national borders, whatever it might be. Second
product that we built is basically a tower that flies.
We call it Ghost, a helicopter that has many of
the same sensors. It's fully autonomous, so it takes off,
executes a mission, returns to base, and provides that same

(26:07):
level of autonomous operation and computer vision.
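
For a concrete sense of the alerts-instead-of-raw-feeds idea behind the sentry tower, here is a minimal, hypothetical sketch. The event fields, labels, and threshold are all invented for illustration; Anduril has not published how Lattice's actual detection pipeline works.

```python
# A hypothetical sketch of event filtering: rather than streaming every
# camera frame to an operator, surface only detections worth looking at.
from dataclasses import dataclass
from typing import Iterable, Iterator

@dataclass
class Event:
    sensor_id: str
    label: str         # e.g. "person", "vehicle", from an upstream detector
    confidence: float  # detector confidence, 0.0 to 1.0

def alerts(events: Iterable[Event], min_confidence: float = 0.8,
           ignore: tuple = ("animal",)) -> Iterator[Event]:
    """Yield only the events an operator should actually look at."""
    for e in events:
        if e.confidence >= min_confidence and e.label not in ignore:
            yield e

feed = [
    Event("tower-07", "animal", 0.95),   # suppressed: not of interest
    Event("tower-07", "person", 0.55),   # suppressed: low confidence
    Event("tower-12", "vehicle", 0.91),  # surfaced to the operator
]
print([(e.sensor_id, e.label) for e in alerts(feed)])
```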

Speaker 1 (26:11):
And what is that meant to do, and where
do you expect to see that deployed?

Speaker 2 (26:16):
Yeah, the same thing as the tower, except it moves.
And so a tower by its very nature has a
range that it can operate in. And if something is
moving outside of that range, or you expect that things
are moving at a higher velocity, you might want a helicopter
to be able to track and pursue. There are a
lot of remote operations, like special forces units in the

(26:37):
field that might want forward notice of things
that they are going to be encountering on the road
ahead of them. So there are a lot of potential
applications for Ghost. The third product we call Anvil, which
is a kinetic interceptor for unmanned aerial systems. So I'll explain that.
So most people, I'm assuming have probably heard about the

(26:58):
threat of drones in airspace, particularly in military environments where ISIS
and other adversaries are taking standard consumer drone technology like DJI Mavics
and stuff like that, putting grenades on them, or just
flying them for surveillance operations, and this has become a
terrible risk to our service members abroad. Taking them down

(27:22):
initially is as easy as jamming their radio signal and
forcing them to return to where they came from, or
denying GPS in some environments, things like that. But as
the drones become more and more autonomous where they don't
really emit any signals that you can jam, it becomes
harder and harder to remove them from your airspace. And
so kind of the crazy idea that Palmer and the

(27:44):
rest of the team had was to use another drone,
our own drone with a terminal guidance system on it,
to lock on to the adversarial drone and fly into
it at high speed, kind of like a flying bowling
ball that's intended to knock them out of the sky.
And so all of these things kind of paired together,
where you have static passive surveillance from the towers, you

(28:04):
have a dynamic mobile surveillance capability from the helicopters. You
have the interceptor that can take out aerial platforms that
are approaching your facility or whatever it might be. These are
like layers in an overall platform that we call Lattice, which
is the battlefield management, command and control software that sits
behind all of it.
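
On the "flying bowling ball": ramming one drone into another is a terminal guidance problem. Anduril has not said what guidance law Anvil actually uses, but the textbook technique for this class of problem is proportional navigation; here is a minimal two-dimensional sketch, with all numbers invented for illustration.

```python
# A minimal, hypothetical sketch of 2D proportional navigation, the
# classic terminal-guidance law for steering onto a collision course.
# Purely illustrative; not Anvil's actual guidance system.
import math

def pronav_accel(rx, ry, vx, vy, n=3.0):
    """Commanded acceleration vector (ax, ay) for the interceptor.

    (rx, ry): target position minus interceptor position (m).
    (vx, vy): target velocity minus interceptor velocity (m/s).
    Classic law: a = N * Vc * lambda_dot, applied perpendicular to
    the line of sight, where lambda_dot is the rotation rate of the
    line of sight and Vc is the closing speed.
    """
    r2 = rx * rx + ry * ry
    r = math.sqrt(r2)
    los_rate = (rx * vy - ry * vx) / r2       # lambda_dot, rad/s
    closing = -(rx * vx + ry * vy) / r        # Vc, positive when closing
    a_mag = n * closing * los_rate
    return (-ry / r * a_mag, rx / r * a_mag)  # perpendicular to the LOS

# Target 100 m ahead, 20 m off to the side, drifting across our nose:
print(pronav_accel(100.0, 20.0, -30.0, 5.0))
```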

Speaker 1 (28:23):
And fast forward from this conversation you guys were having,
you know, in Orange County and talking about all this
kind of stuff to now like who are you doing
business with? Department of Homeland Security? Like who you know?
Who are you guys selling your technology to.

Speaker 2 (28:36):
Yeah. Predictably, our largest customer is the US Department of Defense,
numerous branches. So we're working with Special Forces, we're working
with the Marine Corps, We're working with the Air Force,
the Navy, So that represents the majority of our business.
We do also have some work with federal law enforcement
and the Department of Homeland Security, and then we're working
with some of our international partners, close allies like the

(28:58):
United Kingdom, on very similar defense mission sets.

Speaker 1 (29:01):
The last time I interviewed Palmer must have been,
I would say, like over a year ago. The headline
grabbing thing I remember for you guys was the first
product you guys were building was like the virtual border wall,
And I think that's the thing that people looked at
as are you building the digital wall for Trump? Because
this was the first thing you guys put out publicly.
A lot of people looked at Anduril and looked at
what you were doing and asked you some of these

(29:23):
questions about is this ethical what you're building and is
this the right thing? So I'd be curious to know
where that stands now.

Speaker 2 (29:31):
Yeah, the work that we're doing with Customs and Border
Protection within the Department of Homeland Security is not a
policy initiative. We don't have policy views on how border
control should be managed. But we do believe that within
the constructs of a democratic government, we should be supportive

(29:51):
of the things that the government is doing, the mission
that they're executing. And I think in this case, Democrats, Republicans,
everyone kind of agrees that you have to have some form
of border security, just like literally every other country in
the world, and that when you're making decisions about what
policies you're going to accept with regards to that mission set,

(30:14):
you need to have data. You need to know what's
actually happening. You need to know what kind of people
are coming across the border. Is there actually the flow
of drugs and black market weapons that people think there are,
Are there unmanned aerial systems that are posing a threat to
our civilian population. You know, there's all these questions that
you want to have answers to, and I think that

(30:34):
kind of an indefensible position would be, actually, we don't
need information, We don't need any data. We don't need
to know what's happening. We should just let everything happen
that's happening. And so I think this is kind of
why we've gotten such bipartisan support for the technology, is
that we're not making a political assertion.

Speaker 1 (30:52):
How do you think ethically about the data? So the
data that you guys gather, I'm sure you're gathering a
lot of this data right from everything it's picking up.
And so what's the standard there?

Speaker 2 (31:02):
We do not own the data. The data belongs
to the customer. And by the way, that's in every case,
every use case, aside from our own technology that we're
using in test environments. So yeah, basically they get the
feeds off of the towers, the helicopters, or whatever
else they're using. Human beings, actual people who have a
job and a mission in that job, are making decisions

(31:24):
about how they respond to that data. So if you
see an unmanned aerial vehicle with drugs attached to it, which
is something that happens periodically, they can decide are we
going to pursue that, are we going to try to
shoot it down? Are we going to try to follow
it until it runs out of battery. Are we going
to let it go? You know, these are decisions that
human beings are making at the end of the day.
So yes, absolutely, there are ethical questions that are involved

(31:47):
with each of the products that we build, and there's
a process by which I think you go through thinking
about how that mission is realized and who are the
people that are involved in making that decision.

Speaker 1 (31:57):
So can you explain that? You talk a lot about
this just war theory. Can you just explain to us, like,
what's the just war theory?

Speaker 2 (32:03):
Yeah, so just war theory. This is a many-centuries-old
framework for evaluating how and when you go to
war and how war is conducted. And you know, the
general concept is that there are all of these principles
that we can apply to any technology. They apply the
same to a knife, an arrow, a bayonet as they

(32:26):
do to autonomous robots in battle. And so I
wrote an essay, which it seems like you've probably read,
about defense ethics. So there are four specific principles that
I talked about in the essay. The first is last resort,
so the principle that war is a last resort. The second
is discrimination, the principles of discrimination and proportionality, and the last

(32:49):
is right intent or just authority.

Speaker 1 (32:51):
So this is how you look at the ethical decisions
that you guys are making and the products you're building,
and what kind of framework you're going to think about
when when you're building things and deploying them.

Speaker 2 (32:59):
That's right. So oftentimes, and even some of your previous guests,
say Professor Cerf, who you had on to talk about brain
modifications and things like that, he talked about some of
the ethical and regulatory structures that should be put in
place for this. People often talk about how we need
some sort of ethical framework for evaluating autonomous systems, artificial intelligence,

(33:21):
quantum computing, neuromodifications, things like that. Yeah, and my argument,
the core argument is that actually there are frameworks that
exist today. Just war theory happens to be a really
good one for applying these principles, and you just have
to think critically about how those new technologies fit into
each of those paradigms.

Speaker 1 (33:38):
First of all, a little birdie told me that you
do these dinners at your place. I have a feeling
that this theory, you guys jam on this, like, maybe
at these dinners or some sort. Like, can you give
us like specifics around the technology you're building now, and
if you don't mind, just like be as honest as possible,
because I think that you know, I understand that these

(33:59):
are hot button issues and that like you know that
there can be headlines out of them and everything. But
the stuff you're dealing with is really fascinating, right and
the issues that you're talking about, right. I like what
you said about this idea of having rigorous debate and
saying things we can't say out loud and just kind
of like yelling at each other about theories and this
and that. So can you take us to this compound?
It's like two hours outside of LA, right? Or where

(34:21):
exactly is Anduril?

Speaker 2 (34:22):
It's like, depending on traffic, forty minutes, Okay, five hours,
I don't know, there's so much traffic there.

Speaker 1 (34:26):
I'm just going to go with like five hours. But
take me to like how you apply this theory into
something really specific that you've worked on, because I can
imagine for you it's challenging and interesting and you really
are kind of thinking ethically about building out this technology
that you guys are going to be deploying and selling
to governments and putting in the hands of other people,

(34:48):
and seeing how it will be used. So, like, how
has this theory worked specifically, because when you talk about
my interview with Moran, like he talks about a lot
of this stuff in theory, you're actually doing a lot
of this stuff in reality.

Speaker 2 (35:01):
Yeah, that's a really good point. This is something that
has to be approached with a high degree of seriousness
because it does have real world implications. This isn't a
theoretical conversation around you know, some defense technology that might
exist in ten years. It's like, these are the things
that are being deployed today, not only by Anduril but
by a lot of other companies as well.

Speaker 1 (35:20):
And so going to the technology that you guys are
building specifically, say like Lattice, right, this technology that's already
been deployed at the border. Right, take us through the
ethical framework. What did you decide not to do? You know,
where did you guys decide to draw the line?

Speaker 2 (35:35):
I would say that deciding where to draw the line
implies that there was some sort of conversation about accelerating
beyond what we were building as a product, which wouldn't
be an honest description of what happened. You know, I
think you could extrapolate a million different things ranging from
like some sort of laser in kind of a one
to one way or a nuclear weapon and everything in between.

(35:58):
With literally any technology that you built, anything that has
a camera on it, you could say, like that could
be used to target for firing many nuclear warheads. I
don't know, It's like that was never part of the conversation.
The entire concept behind the sentry tower was around data collection.
That was it from the very beginning. It was never
about anything more than data collection.

Speaker 1 (36:18):
I guess I look at like maybe the inevitable. Do
you have conversations about facial recognition, or, we wouldn't go
near that? I mean, where, like where do you draw
the line ethically? Or even when we talk about Ghost,
right, something else you guys are deploying, like what are
the other ethical conversations you guys have about those specific technologies?

Speaker 2 (36:35):
Yeah, no, that is a really good question. And I
think this is like one of these examples where the
values that we hold as a country are wildly divergent
from our adversaries, like facial recognition, really really good point.
Is it necessary to the mission to be able to
at long range determine who people are algorithmically? No, not

(36:58):
really. Like, what our military wants to do, what our
officers in the Department of Homeland Security want to do,
is they want to know when something is happening. They
don't need information about who everyone is as identified by
some sort of algorithm. This is
the thing that you know, China is doing, and this
is the ideology and technology that they're exporting to dozens

(37:22):
and dozens of countries in partnership with companies like Hikvision
and Dahua and SenseTime and Megvii and Huawei. I mean,
that is the Chinese ideology that they're exporting. That's not
something that we're interested in doing.

Speaker 1 (37:36):
Does it worry you that you build out this technology? I'm
sure you get this question a lot. You know, you
develop this technology that you believe you do it in
an ethical way, but that it could be used in
a way that is not ethical. That someone could take
what you're doing and then add facial recognition to the
technology that you've deployed.

Speaker 2 (37:52):
Yeah, I think this is why it's really important when
you're making sales of technology in the defense space to
consider the end user of that technology and what processes
exist to control that. One of the really important things
about this ethical conversation to us at Anduril in the
context of the United States, is that, you know, we

(38:13):
shouldn't take the democratic process for granted. We are blessed
to be in a place that has a system of
checks and balances, a very large non-political civil servant
bureaucracy that exists that makes these decisions, and we have
an ability to change policy at the ballot on an
every-two-year basis, basically, at a federal level. And

(38:35):
so we don't believe that the quote enlightened technocrats in
Silicon Valley should have the ability to decide for the
American people what technology our government should be using and
what technology our government shouldn't be using. You know, we
believe people are really smart and that the government has
the ability to make changes as needed as this tech

(38:57):
is deployed.

Speaker 1 (38:58):
One of the most interesting questions I think when it
comes to the future of tech and war is this,
I guess I come back to this line. Maybe it's
a little dramatic, but it's like AI to kill or
not to kill? Right, Like this idea that you guys
could be building out autonomous systems that can make the
decision to kill. Will you do that?

Speaker 2 (39:21):
I mean, our policy internally has always been that in
almost every case, a human in the loop makes a
ton of sense. There are certainly cases where they might
not even involve human casualties, where you really can't have
a human. For example, if you have hypersonic missiles flying
at you, you have like a split second to make

(39:41):
a decision about whether or not you're going to shoot
it down. And these are the types of things again
like Iron Dome, that have been driven by computers. And
so there's this constant conversation that seems to be happening about, well,
in what future world like, will we just have to
make these decisions? Actually, like we've been doing this for
over a decade. There are computers that are making kinetic

(40:03):
decisions on a regular basis today. When it deals with
human life, I think it raises the stakes quite a
bit in the engagement of humans in the loop and
that decision making, which does feel really important. I think
one of the conversations that doesn't seem to get enough
airtime is the idea that you can't just wait for

(40:24):
all of the theory around the ethics to be worked
out before you build something, because our adversaries will build
it and if we look back at history, you can
see that the wielder of the technology, the person that
builds the technology and owns the technology, is really in
control of the standards and norms that are used to
deploy that technology. You can see this with the way

(40:46):
that China is approaching regulation around 5G with the
International Telecommunication Union. They have a seat at the table.
We do not have a seat at the table. And
if you go into these conversations assuming that we're going
to somehow be able to, you know, push our agenda,
I think you will find in history that has been
the wrong assumption. Another example of this is the Intermediate

(41:06):
Range Nuclear Forces Treaty, the INF, that we had with
the Soviet Union, where we agreed with the Soviet Union
that we wouldn't build intermediate range ballistic missiles. China was
never a party to this treaty. They moved forward with building
intermediate range ballistic missiles. Then Russia, when they realized that
was happening, they also began building intermediate range ballistic missiles.
And so we put not only ourselves at a disadvantage,

(41:29):
but we put our service members in the Pacific at
risk for their lives because we were beholden to a
treaty that was not being followed, that was not being
taken seriously. And so whether or not we build AI
for defense, whether or not we build autonomous systems for defense,
whether or not we build better precision fires for defense,
whether or not we build quantum computers for defense, other

(41:51):
people are going to build these things. And we want
to be in a position where we have a seat
at the table talking about how those technologies are being used.

Speaker 1 (41:58):
So take me to that seat at the table, so
as you have a seat at the table, you guys
are sitting there talking about these things. I remember the
last time I interviewed Palmer, like asking him that same
question about like will you deploy technology that could make
the decision to kill? And I remember, I think I
remember him, you know, saying, right now, no. But that
doesn't mean in the future we won't, you know. And
I thought that was really interesting because I do think

(42:20):
it comes with all these really interesting ethical issues of
at what point and who's coding the decisions it's making,
and AI is so flawed in general, But so have
those conversations moved forward with you guys. I mean, what,
when's the last time you spoke about it or what
was the nature of it.

Speaker 2 (42:38):
I can't think of any specific examples of tech that
we're building right now where that has been an issue.
But I think Palmer's answer is correct. I mean, there
are a lot of versions of just war application that
do involve lethality. It's very, very hard to predict the
future to say, like what the conflicts of tomorrow will be,
and you know the types of decisions technologists will have

(43:01):
to make in order to sustain an advantage in those conflicts.
But to the extent that the tech is deployed as
the last resort, to the extent that it is more discriminate,
to the extent that it is more proportional, to the
extent that it ensures last right intent and just authority,
and it ensures that human flourishing can continue in a
more abundant way, then absolutely. I'm sure there are

(43:24):
applications of technology that will have lethal intent that fulfill
and check all of those boxes. That said, I'm sure
there are also technologies that will not. And those are
the technologies that not only would I not build, but
I also would not invest in them.

Speaker 1 (43:38):
We're talking about the future of autonomous weapons and
AI making the decision to kill or not to kill. Are
you worried that if it makes a decision in a
split second, like traditionally it's been a human making that decision,
and we can put that on somebody, that in the
future, if AI makes the wrong decision, the
liability could fall on you guys?

Speaker 2 (43:58):
I think liability is a complex issue with all technology,
whether it's you know, self driving cars in the consumer
world or you know military technology and the defense world.
Of course, like the liability needs to be worked out
at some point, whether it's through regulation, whether it's through
some sort of legislative action. I don't view that as
something that would deter me from wanting to work in

(44:19):
this space, because again, I believe in the democratic process,
and I believe that there will be some sort of
fair reckoning for these things. I think one of the
things that has kind of always been inspiring to me
in this is that science fiction has kind of thought
through a lot of these ethical challenges well ahead of
its time, like well well well ahead of its time,

(44:40):
and so science fiction is an awesome place to go
to start talking through some of these really complex challenges,
like, for example, the Prime Directive in Star Trek. Like,
you know, Elon's over here talking about interstellar travel and
things like that. It's like Star Trek has worked through
more academic research on the impact of visiting neighboring civilizations

(45:03):
than any like university has. And I think these are
the types of things that we can and should be
looking to fiction to partially inform so that we can
be more prepared you know, at the eventual moment that
those technologies come to fruition, if they come to fruition
at all. But I think Star Trek is a great
example of that.

Speaker 1 (45:23):
We're going to take another quick break to hear from
our sponsors. But when we come back, a mishap with lightsabers. Yep,
you heard me correctly, lightsabers. And if you have questions
about the show, comments, really anything, you can text me
on my new community number nine one seven five four
zero three four one zero. So what would you say,

(46:07):
like as an ethicist and someone who's kind of thinking
about these things, and you say, like, let's go specifically
to Anduril, like if this is the kind of thing
that could come down the pipeline, Like what would you
want people to keep in mind when thinking about deploying
these systems with AI that have the ability to make
these decisions, Like, what is the conversation we should be
having nationally, Like, because you talk about the government, you know,

(46:28):
you need to regulate this, but oftentimes the government is,
you know this probably better than anyone, light years behind
the technology. So what is the conversation that we should
be having about this kind of thing.

Speaker 2 (46:39):
Well, certainly within the government, the DoD, the Department of
Defense, has very detailed rules of engagement that they have
followed for a very, very long time. This goes back to,
you know, the inclusion of the Geneva Conventions and United
Nations agreements about use of force. And so I think
these types of conversations come naturally to the defense community.

(47:00):
They know how to think about it. In the tech community,
I think it comes way less naturally. And that's where
I have to engage more with people on it, because
you know, these are generally strongly held, very strong beliefs,
with very little data to back them up. People that say,
like in theory, I would you know, I would never
work on these things, but I also haven't considered any

(47:21):
of the implications that lead to those decisions. And so
it becomes more of a conversation about presenting scenarios and
saying what is the most ethical way to move forward
with this? This is what the dinners at my house
are about. These are not... I'm not hosting Anduril
employees at my house. That would be kind of a waste.

Speaker 1 (47:37):
Example, I think, I guess that's what I'm interested in. Like,
give me an example of one of those like those
scenarios that you guys talk about.

Speaker 2 (47:43):
Sure, so let's imagine that North Korea either has like
humans or robots or humans in robots, like MechWarrior
and they just like flood into the demilitarized zone just
like thousands and thousands of objects that are pushing forward.
You have the option of, A, taking a, like, serious

(48:06):
kinetic one-to-many action and, you know, firing very
large bombs, missiles, nukes, whatever, to eliminate them, not knowing
what's good, what's bad, or otherwise not knowing if there's
like a zombie plague that's like forcing everyone to flee
the country, or you can do some sort of AI
assisted like auto turret. So there's like, you know, I

(48:27):
don't know, guns on a swivel that you can kind
of control and they automatically lock on target. Or if
there was an AI that said, differentiate between robots, between
people and people that have weapons, and only shoot people
with weapons and robots, don't shoot any people that are
running towards the border without weapons. That's an AI-driven technology,

(48:49):
and there is a lethal kill decision involved. But you
could save thousands and thousands and thousands of lives by
executing that strategy instead. A human could never make decisions
that rapid with that much data flooding into the system.
There's just no way they could do that while deconflicting
across all those targets at the same time.
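
As a thought experiment only, the engagement rule Stephens describes can be written down in a few lines. This is a hypothetical sketch, not any real system: the point is that the rule itself (engage robots and armed people only, defer to a human when uncertain) is explicit and inspectable, while the hard part, the upstream classifier that feeds it, is exactly where the bias concerns raised a bit later in the conversation come in.

```python
# A hypothetical sketch of the engagement filter described above.
# Purely illustrative logic, not any real weapons system.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # "person" or "robot", from an upstream classifier
    armed: bool        # weapon detected by that same (imperfect) classifier
    confidence: float  # classifier confidence, 0.0 to 1.0

def may_engage(d: Detection, min_confidence: float = 0.95) -> bool:
    """Encode the stated rule: engage robots and armed people only,
    and refuse to act at all on low-confidence classifications."""
    if d.confidence < min_confidence:
        return False  # uncertain: defer to a human, never fire
    if d.kind == "robot":
        return True
    return d.kind == "person" and d.armed

wave = [
    Detection("person", armed=False, confidence=0.99),  # fleeing, unarmed
    Detection("person", armed=True, confidence=0.98),
    Detection("robot", armed=True, confidence=0.70),    # too uncertain
]
print([may_engage(d) for d in wave])  # [False, True, False]
```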

Speaker 1 (49:07):
I mean, and the idea is that even when humans
do make these decisions, oftentimes they are tired, fatigued, stressed,
and under all of these pressures, in these different situations,
when they decide to make the decision to kill
or not to kill.

Speaker 2 (49:18):
I think if you go and you talk to the
soldiers that served in the last few international conflicts, the
decisions that torment them, that keep them awake at night,
are decisions that they had to make in the blink
of an eye. You know, a vehicle driving at high
speed towards one of their bases. You don't know if
that's a sick child and the father is just trying

(49:39):
to get them to medical care as quickly as possible,
or you know, a car full of explosive material that's
going to run into your base and kill service members
and they have to make these split second decisions about
what to do. They want more information, they want to
be able to make better triage decisions, and by withholding
that technology from them, we're putting people's lives at risk,

(50:00):
both service members as well as civilians.

Speaker 1 (50:01):
I only play the Devil's advocate on AI because then
I think about how flawed AI can be, how biased
it can be, and how sometimes the algorithm makes these
decisions and someone's on the other side of it and
you're like, wait a second, how did that happen? And
you question that as well. So I think they're equally
very different issues, but issues when it comes to AI
making these decisions, and I think, no doubt the future

(50:23):
will be autonomous systems that are AI driven when it
comes to the future of war. So like that seems
complicated to me as well, since AI is biased.

Speaker 2 (50:31):
Yeah, there's bias in both directions, for sure. Human beings
have biases that they don't even realize they have that
cause them to make decisions. Computers have different sets of biases.
To the extent that we can understand the way that
these models are working, we can correct a lot of
those over time. I don't think there has ever in
the history of technology been a something that was perfect

(50:53):
at the outset, Like there's always room to improve. There
are things that we can do to make the models
more accurate, reduce the bias that's employer sit in them,
And I think that is important work.
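One concrete, if simplified, version of "correcting bias over time" as Stephens suggests is to audit a model's error rates per subgroup and flag gaps for recalibration. A minimal sketch, with entirely made-up data and group names:

```python
# Audit a classifier's false-positive rate per subgroup. A large gap between
# groups is a measurable signal that the model needs retraining or per-group
# threshold adjustment before it is trusted further. Data is hypothetical.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_positive, actually_positive)."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

audit = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False),
]
print(false_positive_rate_by_group(audit))
# -> {'group_a': 0.5, 'group_b': 0.6666...}: group_b is flagged far more
# often on innocuous cases, which is exactly the kind of gap to correct.
```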

Speaker 1 (51:02):
But, like, draw a line for me. I mean, I'm just asking for one line, you know — just give me something, a certain type of defense technology that you will not build. Like, what is your no-go zone? Like, someone says something and you're like, ugh, not that.

Speaker 2 (51:17):
I actually think these are really easy, and there's a bunch of them. Like, one example would be some sort of, like, bio-plague that you can release into enemy territory — right, coronavirus. Like, pretty sure that's ethically bad if that was an adversary launching that attack. Anything that is disproportionately affecting non-combatants is to be avoided in

(51:42):
my mind, so I think that's one. Another is technology that conceals information rather than making it more accessible to decision makers. You can look at things like the Great Firewall in China, or El Paquete in Cuba blocking out information, or Russia's denial of its incursions into Ukraine and Georgia. I think these are all, like,

(52:04):
examples of places where the government has made a concerted effort to hide information from their population to defend a potentially crazy decision that they might make in the future. That's super bad and not something that I would want to be involved in ethically.

Speaker 1 (52:18):
Has there been something at Anduril that kind of got left on the cutting room table? You just decided, we're not building this?

Speaker 2 (52:24):
Ah — none for ethical reasons, because we haven't really, like, edged into any of these crazy territories yet. There are some that have been left on the cutting room table because they didn't work like we thought they should and weren't worth pursuing. We've had some crazy ideas around, like, real-life lightsabers and stuff like that, and it turns out that for some of these things, the science just

(52:45):
isn't ready yet. By the way, the same can be said for augmented reality. For decades, soldiers have been really interested in a heads-up display that they can wear in combat that gives them information on mission objectives, on where enemy actors are to prevent them from getting involved in friendly-fire accidents, so on and so forth — map overlays.

Speaker 1 (53:07):
I remember Palmer saying that to me the last time we did an interview, and I thought that was really interesting. He was saying the future almost feels like a bit of a video game, right? And he has the video gaming background from Oculus. But, you know, you have this almost virtual or augmented reality layer that shows you everything around you.

Speaker 2 (53:22):
That's right.

Speaker 1 (53:23):
Now you've just added, like, the idea of a lightsaber too, so my mind is blown a little bit.

Speaker 2 (53:27):
I think we can probably press pause on the lightsaber. I don't — I don't think that's hard.

Speaker 1 (53:30):
How far did you get?

Speaker 2 (53:32):
It got far enough that Palmer accidentally cut himself and had to get medical care.

Speaker 1 (53:37):
So wait — you guys built that?

Speaker 2 (53:39):
That's a question for Palmer. I don't — I don't understand the exact physics of what happened, but needless to say, we are no longer building it. By the way, that would have been used for breaching — so instead of putting C4 on doors, using a plasma cutter, essentially, to get into denied environments.

Speaker 1 (53:55):
Oh man. So not, like — actually, no, no, no, not hand-to-hand combat, just breaching. Okay. And, uh—

Speaker 2 (54:03):
So, let's see — what was the question?

Speaker 1 (54:05):
I know — it's pretty far. It's pretty hard to come back from that.

Speaker 2 (54:07):
Yeah, we've just, like, gone down this path. Should we go back to augmented reality? So this has been a dream for soldiers for a really long time, and I think it would be incredible. Like, there are all sorts of, like, ethical goods that can come out of this. Primarily, I think most people would be shocked by how many casualties caused in theater are friendly-fire related.

(54:29):
Like, it's just — deconflicting all of the stuff that's going on is really hard, and will only become harder as there are more autonomous assets deployed. And, uh, obviously our team is the best in the world at building heads-up displays for defense purposes. I don't think there's anyone in the world that has more talent than we

(54:50):
do around this specific topic.

Speaker 1 (54:51):
I mean, if anyone's going to build out something like this, it would be the team that built Oculus, right?

Speaker 2 (54:57):
Yeah, no — I mean, there is no better team for doing that. And our decision was that it's just not there yet. The science is not there yet to make it credible in deployment. There's a DoD contract called IVAS, and Microsoft is on the contract to deploy HoloLens, but it's, like, super test-and-prototype driven. Obviously,

(55:17):
like, it's hard to imagine a soldier walking into combat with, like, a, you know, really heavy, blocky HoloLens hanging over their head with a really tiny field of view. And so I think that there's some, like, test and evaluation that's happening to figure out what the future might look like. But, you know, we'll be there and ready when the science is ready to make it possible. And

(55:38):
the software that we're building, this Lattice platform — at the end of the day, it is the back end for feeding all of that data to the operator in the field, whether it's on their mobile device, you know, over radio communication, or eventually in an augmented reality headset.
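The pattern Stephens describes — one back end fanning the same fused picture out to whatever device an operator happens to carry — is essentially publish/subscribe. A rough, generic sketch of that idea; none of these names reflect Anduril's actual Lattice API, whose details aren't given in the interview:

```python
# Generic pub/sub fan-out: one feed of fused track updates, many client
# types. Each subscriber renders the same data in its own medium.
from typing import Callable, Dict, List

TrackUpdate = Dict[str, object]  # e.g. {"id": 7, "kind": "drone", "lat": ..., "lon": ...}

class TrackFeed:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[TrackUpdate], None]] = []

    def subscribe(self, handler: Callable[[TrackUpdate], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, update: TrackUpdate) -> None:
        for handler in self._subscribers:
            handler(update)  # the feed doesn't care what the client is

feed = TrackFeed()
feed.subscribe(lambda u: print(f"[mobile]  map pin at {u['lat']},{u['lon']}"))
feed.subscribe(lambda u: print(f"[radio]   voice alert: {u['kind']} detected"))
feed.subscribe(lambda u: print(f"[headset] AR overlay for track {u['id']}"))

feed.publish({"id": 7, "kind": "drone", "lat": 32.7, "lon": -117.2})
```

The design point is the decoupling: a phone, a radio bridge, or an eventual AR headset can all hang off the same back end without that back end changing.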

Speaker 1 (55:53):
And have you had any qualms about working with the current administration? I mean, I know you were part of the transition team for Trump, helping with defense and giving recommendations on what they should do. Have you had any hesitations? And I know that you guys aren't political — you say you're not political — but do you take any of that into account, to an extent? I mean, beyond the fact that, yes, Silicon Valley is notoriously liberal.

(56:16):
I mean, even for you personally — now that we're in an enclosed room and, you know, we have time, so you can't really get out — I mean, do you think about any of this stuff personally? Like, what you're building, who you're building it for, where it's going to go, where it's going to end up?

Speaker 2 (56:32):
Talk to me. Are you implying that the door is locked and I'm not allowed to leave?

Speaker 1 (56:36):
Did I do that? Honestly, I don't think it's locked.

Speaker 2 (56:41):
The reality is, I work at the intersection of tech, finance, and government, which are, like, the three most hated industries in America. So I'm not shying away from conflict or controversy. The way to state our engagement with the government is that we're working with the United States government. Administrations come and go, some better than others. And certainly they

(57:02):
have influence in things like budget policy, execution, presidential authorizations — like, there are things that presidential administrations are responsible for. But by and large, the enormous population of civil servants are the ones that are responsible for executing the mission that they've been given to execute. And we are not

(57:23):
going to withhold technology from the people that are responsible for this mission because of who's in the White House.

Speaker 1 (57:29):
But you are talking about a White House that has come under controversy for an immigration ban, and these are technologies you're building that could directly have an impact in that kind of way. So, I mean, does that ever come into account for you? Does that ever play into how you feel about it?

Speaker 2 (57:46):
I mean, I think this is, again, the beauty of the democratic process: if there were some sort of policy that we had a strong disagreement with internally — unanimously, or at least unanimously across the leadership of the company — would we push back and say, like, we're shutting the business down and not selling to the government because, you know, we're afraid of where it's going? I can't say that

(58:07):
it would be impossible that that would happen, because, you know, obviously crazy things have happened to nation-states in distress over the history of the world. I don't think we're there. I don't think we're, like, on the precipice of some sort of revolution that's going to lead to some terrifying world war of significant ethical proportion. So in that regard,

(58:27):
like, the decision really comes down to, you know: do you trust the democratic process? Do you trust that if a policy oversteps, there are ways that that policy can be corrected through the legislative process or the judicial process? And right now, I think the answer is unequivocally yes. We have a Democratic-controlled House of Representatives; they have the ability to legislate. The judicial branch is, you know,

(58:49):
functioning in America, where, like, the constitutionality of the policies that are laid out by the executive and legislative branches is constantly being evaluated. And as long as that's the case, I think, you know, it is the right moral, ethical thing to do to continue working with the government of the United States of America.

Speaker 1 (59:07):
I didn't mean to laugh — but my follow-up was going to be: would you deploy lightsabers to this White House?

Speaker 2 (59:14):
Wow. I mean, we could talk about the, like, ideology of the last three Star Wars movies, because I think Luke Skywalker in The Last Jedi would certainly say that the answer should be no, because Jedi can't be trusted and they should be eradicated from the Earth because it's too much power. But I don't think we're talking about that. And I also think J.J. Abrams clearly disagrees and made

(59:37):
that very clear in The Rise of Skywalker, which, as a Star Wars fan, I really appreciated.

Speaker 1 (59:41):
Okay — I want to know... sorry, I totally derailed that. What is it like being involved in a business that a lot of folks in Silicon Valley kind of disagree with? Because, I think it was twenty eighteen, you had Amazon employees sending a letter to Jeff Bezos saying, you know, stop selling facial recognition services to the US government. You had, I think, four thousand Google employees

(01:00:04):
protesting Project Maven, this contract work with the government. This is a very sensitive issue — tech companies doing business with the government, especially this current administration. Do you feel at Anduril that it's not a popular thing you're doing?

Speaker 2 (01:00:20):
Two answers to the question. The first is external to the company, and the second, I think, is internal to the company. External to the company: first and foremost, it is incredibly important that I am available, accessible, and having serious dialogue and conversation around what I believe are some of the world's most important questions around the ethics of defense technology —

(01:00:41):
and this goes for all of the co-founders of the company, Palmer included. And that's basically, like: are we going to be dismissive of people's concerns? No, we should not be dismissive of people's concerns. Should we be involved in the dialogue around how these technologies are built, how the regulations are formed? Absolutely, we should be involved in that. And I think that's kind of key to,

(01:01:03):
you know, having this conversation with you, Laurie, and to hosting dinners at my house, and going down to Washington, DC — which I'm doing later this afternoon — to engage in conversations with policymakers. It is really important to think through all of the implications of these decisions and make the right ethical decisions to the extent possible in every case. Internal to the company, I think, is slightly different. So, you know,

(01:01:27):
you could see Brad Smith at Microsoft and Jeff Bezos at Amazon came out and took strong stances of support for the US government, for the Department of Defense. Google kind of went in a different direction, and I think has started backtracking a little bit on those decisions, based on just a realization of the control of the business that the activists had internally. But at Anduril, you know, we're

(01:01:47):
in a unique position, because unlike Google, unlike Amazon, unlike Microsoft, our employees signed up to work on defense technology. And so when you read those letters that were written at other tech companies, the key point of contention was that they didn't, quote, sign up to work on defense. Well, at Anduril, they, quote, signed up for defense. They knew

(01:02:08):
that from day one. And so regardless of political persuasion — you know, whether they're socialists or libertarians, or big-R Republicans or little-r — it doesn't matter. Everyone at the company is bought into the mission. And so when we have these discussions — we have ethical discussions, we have mission discussions — it doesn't come down to the nature of the

(01:02:29):
company or the vision for the future that we have.

Speaker 1 (01:02:31):
May I make a suggestion? You may? Okay. It'd be interesting — because you guys are having all these ethical conversations about the future. Having covered Silicon Valley all these years: it's a lot of dudes, right? And I don't know about these dinners, and I don't know, you know, what it was like with Palmer when you guys were talking about the future. But whether it's women or diversity of mindset, you know, that's like a killer app, right?

(01:02:53):
It's like thinking about empathetic mediums, thinking about the human element, in ways that I don't think a bunch of the same type of folks will be able to think about. So I don't know what the breakdown at Anduril is, and I don't want to judge it, but I hope you guys have a diversity of thought in the room when you're thinking about these types of things, just because I do think you guys have a big play at the future. So, do you guys have

(01:03:14):
diversity of thought in the room?

Speaker 2 (01:03:16):
Well, I mean, diversity of thought, absolutely. Other forms of diversity—

Speaker 1 (01:03:20):
And diversity of thought that translates into other forms of diversity, like skin color, you know — all of—

Speaker 2 (01:03:25):
Yeah, you know, this is something that we work really hard at every day.

Speaker 1 (01:03:30):
But don't give me the Silicon Valley line on it. Like, you know — I know everyone in Silicon Valley is working really hard to change it, blah blah blah, and it hasn't really changed.

Speaker 2 (01:03:37):
I agree. And we do have people that are critical to our mission internally that do not look like standard Silicon Valley people — whether it's skin color or gender or whatever it might be. But of course not enough, and that's something that we are working hard at, which is, I know, as you said, the Silicon Valley answer. The other thing that I would say here is that oftentimes people say that the defense community

(01:04:01):
is, like, more unbalanced than other industries. I think that is partly true, but there are some really, really credible minority populations that already exist within defense that I think are always very important for us to engage with. You know, Under Secretary Ellen Lord is the head of Acquisition and Sustainment at the Pentagon. She's,

(01:04:24):
you know, one of the top four people in the department, and we engage with her frequently. There are startup founders like Rachel Olney at Geosite and Daniela Perdomo at goTenna that are active in the defense community. There are academics like Renée DiResta at Stanford that are really strong. In the journalism space, some of the strongest defense reporters

(01:04:46):
are female — I think Lara Seligman at Foreign Policy is incredibly strong in defense, knows what she's talking about; Morgan Brennan at CNBC is really strong. And so, you know, this rejection, or some sort of suggestion that that doesn't exist, I think is just false — and that should be tapped into, for sure.

Speaker 1 (01:05:04):
I'd be curious to know — when we look at Russia and China, as someone who's been deeply involved and looks at these types of threats, what's, like, the scariest thing that Russia and China are doing that you're a little bit concerned the United States won't be able to keep up with, or that we have to keep an eye on?

Speaker 2 (01:05:23):
By the very nature of the conversation, the scariest things are the ones that we don't know about. That's the first thing that I would say. The second is that China has an actual strategy around exporting ideology and technology in tandem with one another. And so, you know, if you look at the deployment of — or the export of — surveillance technologies globally, as I mentioned before, you have these

(01:05:45):
companies like Hikvision and Dahua, Chinese camera companies. You have iFlytek and DJI, which are drone companies. You have AI companies like Megvii — and SenseTime is the most valuable AI company in the entire world, which is interesting to note. And then you have all of the networking and communications infrastructure behind Huawei. And when China is going out to countries that, you

(01:06:07):
know, it's engaging with for Belt and Road, or engaging with for loan programs, they're coming in and they're saying — to Venezuela, Ecuador, Egypt, Iran, so on and so forth — here is not only the technology but also the ideology that wraps up the surveillance state that we've built in China, that we've used to systematically oppress entire people groups, using the technology that's

(01:06:29):
being built by our tech community. That type of stuff is really scary to me — not only because, you know, it's being exported at a rate that is really unbelievable in the last five to ten years, but also because this is not the engagement in the US: our Defense Department does not work closely with the tech community. And so, post-Cold War, our best and brightest engineers kind

(01:06:52):
of stopped going to the DoD directly, or to the defense primes like Lockheed, Raytheon, General Dynamics, and so forth, and they started going and optimizing ads at Google and Facebook. You know, that's where the bulk of computer science talent is in the United States. And so, if you look at the exports of United States technology internationally,

(01:07:13):
particularly related to the military — like foreign military financing and foreign military sales facilitated by the Department of State — it's, like, F-35s, it's AMRAAM missiles. It's all this really high-end military equipment that's built by primes, and it's not the, like, low-level stuff that actually runs the countries that we're sending it to. And that's really concerning

(01:07:37):
to me.

Speaker 1 (01:07:38):
Yeah, I saw you wrote in an op-ed that Putin, in September twenty seventeen, said that AI leadership was a means to become the ruler of the world. China has vowed to achieve AI dominance by twenty thirty. Meanwhile, much of the top AI talent in the United States is working on things such as ad optimization. So that definitely seems like something you think a lot about.

Speaker 2 (01:07:56):
Yeah, absolutely. You know, some of these countries, for what it's worth — China, Russia — they actually have the ability to conscript their top talent into working on national priorities. I'm not suggesting, by the way, that we should conscript talent in the United States. I'm just stating that it is an obvious advantage that they have, which means that we need to figure out ways to appeal to the

(01:08:18):
talent that we need domestically to get them to work on those national priorities.

Speaker 1 (01:08:22):
But how do you do that? Because there's been so much tension between Silicon Valley and the government for a while, and the government certainly has a reputation for moving slow and not really getting things done. And this current administration isn't exactly a place that a lot of liberal-leaning Silicon Valley wants to be a part of. So it certainly creates a conundrum of sorts.

Speaker 2 (01:08:43):
You know, I think there are a lot of people in the tech community that not only would be open to working on national priorities in defense, but would be excited to work on national priorities in defense. They just need a place to do it, and we have a model for this. If you look back over the last thirty years, since the end of the Cold War, there are two — unfortunately only two — multi-

(01:09:04):
billion-dollar venture-backed companies that have done the majority of the work with the government: Palantir and SpaceX. Palantir and SpaceX have been able to recruit the people they needed to execute their mission. Like, SpaceX has a collection of the best rocket scientists in the world working on the Falcon platforms. You know, Palantir has access to the best, you know, data engineers in the world to

(01:09:27):
build the Palantir Gotham and Foundry products. At Anduril, you know, recruiting hasn't been the problem — it hasn't been one of the core problems. Like, we're able to recruit and retain top talent. The problem is in the government working with those companies. And so if you look at Palantir and SpaceX, it took them such a long time to actually crack into

(01:09:48):
the government industrial base that they literally needed to have billionaire co-founders — like, they needed to have a billionaire working at the company to make sure that the business was able to be financed through to meaningful contract volume. You know, this is something that we realized at the onset when we first started Anduril. You know, you can debate whether or not Palmer's a billionaire, but I think having the

(01:10:09):
ability to raise capital at good terms, at reasonable terms, for a capital-intensive business — as well as potentially finance the business through the slowness of government — is incredibly important. And right now, because of this ecosystem that's been created around the defense primes and the military, you really have to

(01:10:30):
have this kind of Howard Hughes style of entrepreneurship, where, like, only the, you know, ragtag bunch of billionaires are actually able to build a business. And that needs to be fixed. We need to get to the point where the government is willing and able to deploy meaningful contracts to companies that are working on things that are important to them, rather than just, like, writing a bunch of little two-hundred-thousand-dollar grants, which is their

(01:10:52):
strategy today.

Speaker 1 (01:10:53):
Can you give us, like, a very visual — I want this very specific, if you don't mind. Like, what does the future of warfare look like? I mean, you're in the center of it. You can throw in some of the technologies you're building, some of the ones coming down the pipeline that you're not talking about yet — you can throw in those. Like, what does the future of warfare look like? How do we battle this out? How do we defend ourselves? What are soldiers using? Just, like,

(01:11:15):
take us there.

Speaker 2 (01:11:16):
We've talked about some of these already. I think some of it is real-time situational awareness of the entire battlefield — so, knowing everything that's going on in an environment.

Speaker 1 (01:11:25):
So these augmented spaces.

Speaker 2 (01:11:28):
Yeah — augmented for soldiers on the ground, to the extent that we require soldiers on the ground, though I expect that that number will go down significantly. But if any of the listeners have read the book or seen the movie Ender's Game, you kind of know this concept of basically putting yourself in virtual reality and then kind of having a top-down view of a battlespace and being

(01:11:49):
able to manipulate the assets that exist in that space. That type of interaction is very likely going to be more and more common. The Air Force is starting to play around with this in a program that they're calling the Advanced Battle Management System, and I think that over the next ten years that's going to become super commonplace. So there's that. I think, you know, as far as engagement —

(01:12:10):
like, kinetic engagement with the enemy will be much more driven by autonomous systems, whether they're remotely controlled or fully autonomous, so that we're not putting humans in harm's way where they're not required. Another great example of this is there's a venture-backed company called Shield AI that has built a small drone, an aerial system, that can be tossed into

(01:12:32):
a building, and it will do a survey of the building and allow the operators — the special forces guys, whatever — that are about to kick down the door to know what's behind the door. This type of information collection that will save people's lives, I think, will become more and more pervasive across all of the different battlefields. But really, the summarization of all of these categories is that,

(01:12:54):
I think, if we can stay ahead, the future of warfare is no warfare. That is the intent: you get to a point where your information dominance, your battlefield dominance, your weapons-platform dominance are all so real and so large that the gap is insurmountable, and the enemy won't want to engage in combat because they

(01:13:15):
know that they'll lose. And, you know, there are all sorts of, like, crazy science fiction versions of this. One of my favorite science fiction authors is this guy named Vernor Vinge, and he has a series called the Peace War series, and in it he talks about this, like, force field that he calls a bobble — inside the bobble, time is frozen and it's impenetrable. And so if you

(01:13:36):
had two enemies that were fighting one another, you just bobble one of them, and then you go to the other one and say, if you don't stop, we're going to bobble you. And then you bobble them, and you unbobble the other one, and you say the same thing: if you don't stop, we're going to bobble you for ten thousand years. And then you unbobble them, and you say, like, it's your choice — you can engage in combat, or you can be bobbled for

(01:13:56):
ten thousand years. And basically the moral of the story is that conflict just stops, because people realize that the cost of engaging in conflict is way too high. And I'm not saying that we're, like, building a bobble, or that we're anywhere near building a bobble. But I think every piece of technology that you build that continues to build upon your advantage gives us the ability to control,

(01:14:17):
to some extent, the amount of conflict that's happening globally.

Speaker 1 (01:14:20):
Could the future of war be fought solely by artificial intelligence?

Speaker 2 (01:14:24):
I mean, I don't think so. This conversation is so far out there that it's always so hard to engage in, like, a credible way. You know, there's movies like WarGames, where the computer decides that the fate of humanity will be better off if it just, like, nukes everything to oblivion. You know, I think that humans are responsible

(01:14:45):
for making human ethical decisions, and it will be that way for a very, very long time. And to the extent that computers are responsible for making decisions, we should be working on ways to counter that threat — to prevent that from becoming, kind of, the standard for how conflict is managed.

Speaker 1 (01:15:06):
I only push back on that to say — well, isn't it y'all's job to think kind of far into the future? Because the problem in technology is that it seems as though some of the folks in Silicon Valley haven't thought far enough into the future, and we see all these human problems that have arisen.

Speaker 2 (01:15:19):
Yeah, I would say that there's some of this that is academic and true — like, we should be having academic conversations about these things — but this is not how conflict has been managed over the course of history. Like, we didn't have a detailed discussion about the atom bomb and come up with, like, ethical frameworks for how we think about it, and then, like, only then, after we'd

(01:15:41):
perfected the ethical framework, decide to build it. Same thing with, like, chem-bio; same thing with precision-guided weapons; same thing with cyber. Like, these things get litigated — regulatorily litigated — by the people that hold the technology. If we sit back and just say, like, we're going to spend the next thirty years, like, having a bunch of, like, fireside chats at conferences with a bunch of academics about

(01:16:05):
what each of these defense technologies could mean, and we don't build it — guess who's going to build it? All of the other countries that are not having that academic dialogue. And we'll be sitting on our hands when they have a critical national security advantage against us that puts our own lives at risk. That seems like a really bad trade-off.

Speaker 1 (01:16:25):
Speaking of all of the other countries — who will you not do business with?

Speaker 2 (01:16:29):
Yeah, I mean, this is — it's a great question, and, you know, it's case dependent, it's process dependent, it's governmental-system dependent. Certainly we'll work with the close allies — so the Five Eyes community: Australia, UK, Canada, New Zealand. There are no questions about our close allies. But we have to have rigorous conversation about really anyone else. So,

(01:16:52):
no China? No China. Although I think the only reason China would want our technology at this point is to steal the IP and develop it on their own. I don't think that they would actually be interested in being, like, a paying customer.

Speaker 1 (01:17:06):
Have you turned anyone down?

Speaker 2 (01:17:08):
Uh, you know, we have so much inbound interest right now in what we're building that, you know, we turn down ninety-nine percent of what comes into our funnel.

Speaker 1 (01:17:19):
But have you turned anyone down for those types of reasons — like, an ethical reason? Yeah, an ethical reason.

Speaker 2 (01:17:25):
Uh, we have decided not to follow up with people because we thought that the use case violated some ethical principle. Like — who? I wouldn't — I wouldn't want to, like, start throwing under the bus people who I never even responded to an email from. That seems cold-hearted and unnecessary. Fair.

Speaker 1 (01:17:42):
Lately, I've become obsessed with this idea of spies, and this idea that, you know, there's so much valuable IP in Silicon Valley. You guys aren't Silicon Valley based, but, I'm assuming, you know, you guys are a valuable company. I'm assuming you guys do background checks and all sorts of stuff. But do you ever think about the employees you guys hire, or even Silicon Valley as a whole — are you worried about people kind of infiltrating these companies?

(01:18:03):
I know it's not necessarily Anduril focused, but just in general — having your experience in the government, and now in Silicon Valley, and you were at Palantir — do you ever worry about nation-states kind of infiltrating these companies for valuable data?

Speaker 2 (01:18:16):
Of course, yeah. I mean, I think our adversaries — particularly China — have made no secret of their interest in disrupting our defense industrial base and stealing intellectual property to the extent possible. The similarities of their fifth-generation fighter to the F-35 are striking. Like, it seems to me

(01:18:37):
they're actually doing a pretty good job at stealing IP when they want it. This is a huge concern. I mean, if you look at, like, the impact of the tariffs that have been recently implemented, and compare that to the cost to the American economy of IP theft from China, there's not even a comparison — like, they're just, like, ripping off so much. And so, yeah, of course — it would be crazy to not assume that they're

(01:19:00):
trying to get at the personal data of the people that are working on these top priorities, as well as information proprietary to the companies that are working on these priorities.

Speaker 1 (01:19:09):
Have you had discussions about it at Anduril?

Speaker 2 (01:19:12):
Oh, sure. I mean, information security is a critical piece of the pie for everyone working in the defense industrial base. We have a crack team of infosec professionals that spend their entire day thinking about how to lock down the edges of the network, how to think through insider threats. You know, it's a core competency that I—

Speaker 1 (01:19:32):
—think you have to have. And I know that for the border control tech, you guys actually went out there — I don't know if you went out there, but I know Palmer went out there and actually, like, was looking at this technology, deploying this technology, playing around with it. You know, you're out on the border; I'm sure there are certain dangers. Have you ever worried about your own safety?

Speaker 2 (01:19:50):
I wouldn't say that I've worried about my, like, physical safety. I've definitely spent a lot of time thinking about my digital footprint and making sure that I'm not exposing myself or my family to undue risk. And so there's always this, like, kind of digital-hygiene exercise that you can go through to try to protect against that.

Speaker 1 (01:20:10):
Last question: why do you do this?

Speaker 2 (01:20:15):
It's a sense of duty, to be honest. I mean, it would be crazy to make it about some sort of, like, sacrifice, because, I think, you know, being an entrepreneur is a ton of fun, and I think if we're successful, there's financial reward for our employees, for our investors, certainly for the founding team. So, you know,

(01:20:37):
I'm not going to act like a martyr. But at the same time, like, these are really hard problems, and we're not building an app to share one hundred and forty characters in a slightly better way with our friends. Like, this isn't popular, and I think we have to unify around the idea that it's really important. And going back to that September eleventh, two thousand and one, sitting

(01:20:57):
in my principal's office — I knew in that moment that this was going to be the career that I was going to work on. I didn't know that this is what I would be doing specifically, but I can't imagine, you know, going to work every day and not thinking about how I can be helpful to our national security — to the priorities that are set forward and the values

(01:21:17):
that our national security stands for.

Speaker 1 (01:21:19):
Why did you know that that was going to be what you were going to do for the rest of your life?

Speaker 2 (01:21:24):
I think, you know, part of the lie that's being told to the world, particularly by kind of modern culture — whether it's, like, millennial, Gen Z, whatever — is that there's absolute equivalence in, like, all things: morality, culture, whatever. And I think events like nine-eleven kind of

(01:21:47):
stuck with me as this realization that there's a real world out there, and, like, we can't just hide in this little bubble of comfort and say, actually, everyone's the same, everyone believes the same things, values the same things. Because I just fundamentally think that there's something about democracy, there's something about capitalism, there's something about

(01:22:11):
the freedoms that we're afforded that's worth defending. And without that, you end up living in these authoritarian, oppressive societies where none of those values can be exercised. And, you know, I don't really care if I can open Twitter on my phone and, you know, yap at

(01:22:32):
people about the political issue du jour. But I do care a lot about my ability to exercise freedom of speech, and that's something that is not protected in many places in the world.

Speaker 1 (01:22:49):
So, this episode had a lot to take in. I'm guessing you guys might have some thoughts. What do you think of Anduril and the technology they're building? Where should we draw the line? Text me on my new community number: nine one seven, five four zero, three four one zero. It goes directly to my phone — I promise I'm not just saying that. And here's a personal request: if you

(01:23:13):
like the show, I want to hear from you. Leave us a review on the Apple Podcasts app or wherever you listen, and don't forget to subscribe so you don't miss an episode. It helps us out a lot. And follow me — I'm at Laurie Segall on Twitter and Instagram, and the show is at First Contact Podcast on Instagram. On Twitter, we're at First Contact Pod. First Contact is a production of Dot Dot Dot Media, executive produced by Laurie Segall

(01:23:35):
and Derek Dodge. This episode was produced and edited by Sabine Jansen and Jack Reagan. Original theme music by Xander Singh. First Contact with Laurie Segall is a production of Dot Dot Dot Media and iHeartRadio.