
June 25, 2025 41 mins

Big Tech promised AI would solve our biggest problems. But behind the hype there is a more unsettling reality: labor exploitation, environmental harm, and the looming threat of mass automation.

Dexter sits down with journalist Karen Hao to talk about her new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. They dig into how today’s AI companies are operating less like tech innovators and more like empires.

Make sure to check out Karen’s book: Empire of AI

Got something you’re curious about? Hit us up killswitch@kaleidoscope.nyc, or @dexdigi on IG or Bluesky.

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:11):
Empire of AI. You're using the word empire here. Can you tell me a little bit more about that decision?

Speaker 2 (00:16):
Yeah. So the term empire in the title is specifically an argument that I make in the book: that we really need to start thinking of companies like OpenAI as new forms of empire.

Speaker 1 (00:29):
Karen Hao is a journalist and the author of a new book called Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, which just came out last month. In case you're not familiar with her work, she herself is a data engineer, and she's also the first person to ever really cover OpenAI, the company that makes ChatGPT. On the surface, the book is a story

(00:51):
of the rise of OpenAI and the story of its founder, Sam Altman. And if you like drama, let me tell you, there's a lot of it. Someone could definitely make a Netflix series about this. And there are also comparisons to be made between Sam Altman and Steve Jobs, and we will get to that. But before we do anything, we should talk about a word that a lot of

(01:12):
the reviews I've seen are kind of glossing over, which is weird because it's the first word in the title: empire.

Speaker 2 (01:20):
The reason is that there are four different features that empires of AI
share with empires of old. The first one is that
they lay claim to resources that are not their own,
but they reinterpret the rules to suggest that those resources
were always their own. That refers to how these companies
just scrape the data on the Internet, same with the
intellectual property that these companies just take, like the work

(01:41):
of artists, the work of writers, the work of creators.
The second feature of empires is that they engage in
a lot of labor exploitation. That refers not just to the fact that these companies contract a lot of workers, often in the Global South or other economically vulnerable communities in the Global North, and pay them extraordinarily small amounts of money to do extremely exploitative work: data annotation,

(02:04):
data cleaning, data preparation, content moderation, where workers are left
with the same level of trauma that social media content
moderators were left with. It also refers to the fact
that these companies are inherently building labor-automating technologies. So OpenAI's definition of artificial general intelligence is highly autonomous

(02:25):
systems that outperform humans at most economically valuable work. So they're explicitly saying, we're going to automate away the things that people usually get paid for, and that in and of itself is labor exploitation. So there's labor
exploitation happening on the way in and happening on the
way out.

Speaker 1 (02:44):
Karen is not kidding. In the book, she travels all
over the world and speaks to people who work in
all levels of the AI industry. And what she found
was that countries that were undergoing some kind of economic
crisis were being targeted by the industry as places to
hire workers who train AI models to understand what kind
of stuff to allow and what to block. This is

(03:07):
work that only a human can do. So who's this empire for? And where do you and I fit in here? From Kaleidoscope and iHeart Podcasts, this is Kill Switch.

(03:32):
I'm Dexter Thomas.

Speaker 3 (03:34):
I'm sorry, I'm goodbye.

Speaker 1 (04:03):
Getting back to Karen's definitions of colonial empires, the first
is that they change the rules so that when they
find something like oil in the ground or art and
literature on the internet, they can say that it belongs
to them. The second is that they exploit people for
their labor. The third is that they control what people
can know about them. In the case of the AI industry,

(04:24):
this can look like companies just finding the top researchers
who are working at universities, paying them lots of money
to drop their university job and come to the company,
and then censoring anything that those researchers write that is
critical of their technology or of how they treat the environment. Again,
controlling what the public knows about them. But I think

(04:46):
it's the fourth element where things start to really get interesting.

Speaker 2 (04:49):
And then the last feature is that empires always have this narrative that there are good empires and there are evil empires. So they, the good empire, need to be an empire in the first place and engage in all this resource extraction and labor exploitation, because that is what will make them strong enough to beat back the evil empire. So back
in the day and during European colonialism, the British Empire

(05:11):
would say that they were better than the Dutch Empire.
The French Empire would say they were better than the
British Empire. And part of being the good empire is
that what they were ultimately doing was civilizing the world.
They were bringing progress and modernity to everyone, and they
were giving humanity the chance to enter heaven instead of hell.

(05:33):
And this is literally the language that AI people use
these days. They talk about heaven and hell. They talk
about recreating God and about bringing the next era of
civilization into being.

Speaker 1 (05:51):
Yeah, and I mean Sam Altman in twenty twenty three going before Congress and saying, look, if we don't put a whole bunch of resources into AI, guess who will? China. Would you like China to win? I don't think you do. I know that you're scared of China. So let's play nice here, you know, play nice specifically with our industry. Also play nice with my company so that we can

(06:13):
come out on top. Which, again, is very similar to how an empire justifies itself, which is: if we don't do this, the bad guys will. And as you lay out in the book, OpenAI really starts out by saying, listen, AI is going to happen, AGI is going to happen. It needs to be the good guys. We're the good guys.

(06:36):
And I think that's something that you do in the book. I think we've gotten really used to this idea that everything was inevitable, that this all was going to happen, that the stuff that happens tomorrow was always going to happen, and the stuff that happens three years from now was always going to happen. But you put it together that none of this is actually inevitable, and not

(06:56):
only that, it's that decisions are being made, and actually a
lot of these decisions are being made by a pretty
small handful of people in Silicon Valley.

Speaker 2 (07:06):
Absolutely. This is definitely one of the core messages that I hope people can take away from the book: technology is very much a product of human choices. And it just so happens that AI has been the product of a very small handful of people's choices, people who have what I would argue is a very narrow worldview, a narrow view of how the world works now and how

(07:28):
it should continue to work. And if I had to point to one decision that was not inevitable, but that OpenAI sort of ushered in, that shaped AI today, it is the fact that they decided to scale this technology aggressively. And that happened because OpenAI early on identified that they had to be number one

(07:51):
based on this idea that there's the evil empire out there, and we, the good empire, have to race aggressively. And they realized that the fastest and easiest way to get to number one was by taking existing techniques within the AI research field and blowing them up with an unprecedented amount of data and an unprecedented amount of computational resources.

(08:14):
So they realized, if we can build the largest supercomputers
in the world, we will have the best chance at
dominating this technology. All of a sudden, all of the
tech companies saw what OpenAI did and went, we want to do that too. And suddenly you see a
step change in the sheer amount of resources that are

(08:34):
now going into developing this technology, and it very much
becomes a scale-at-all-costs paradigm for AI development.
This very specific scaling decision is what leads to significant
amounts of labor exploitation and is what leads to significant
amounts of environmental harm, because that is when you start

(08:55):
talking about covering the earth with data centers and supercomputers, leading to climate harms, health harms, the exacerbation of the freshwater crisis, because these data centers need to be cooled with fresh water. And also, in order to meet the data imperative for the size of these models, that is when they start training on polluted data sets. That was

(09:17):
not a norm at all in AI research. In fact, before OpenAI started training on polluted data sets, the norm was actually shifting towards really small, extremely curated, and clean data sets. A lot of research was coming out at the time, before the ChatGPT moment, where people realized you could actually get away with very tiny data sets

(09:41):
if you prepared them correctly. But then OpenAI shifted to, let's use huge data sets, poor-quality data sets. And that's why you end up having the need for content moderation, because when you're pumping a bunch of gunk into the models, then a bunch of gunk comes out,
and then you have to create content moderation filters to

(10:03):
strip that gunk out before it reaches the user. And
that involves humans, and that involves humans who then are
left with a significant amount of trauma.

Speaker 1 (10:13):
There is a reason that we're talking about Sam Altman and OpenAI so much. Straight up, it's the same reason that people say ChatGPT as a generic term for AI in general. OpenAI was the first company to break through and everyone copied them, and so the company culture at OpenAI has influenced how other companies act, even Google and Microsoft. And then in turn, that's influenced

(10:36):
how the public thinks about AI. And really a lot
of our conversation about AI isn't about computer science. It's
about culture, or maybe subculture, and that subculture is very
interested in a fight between the good and evil use
of AI. And it has this question that keeps coming

(10:56):
up in the book: what's your p(doom)? And before somebody listens to this and thinks, Dexter, what did you just ask this person? It's the letter P, parentheses, the word doom.

Speaker 2 (11:08):
So I'm not a great fan of this phrase. It
refers to probability of doom, and that is a shorthand within a particular ideological community that believes that AI
has the potential to destroy humanity, and so probability of
doom means the probability that humanity will be destroyed by

(11:30):
artificial intelligence. The reason why I'm not a fan of
it is because this ideology is predicated on the idea
that we can fundamentally recreate human intelligence in computers, which
is something that is still heavily scientifically debated. And this
community also believes that once that happens, AI systems will
develop their own consciousness or self motivation or self preservation,

(11:55):
whatever it is that then makes them quote unquote go rogue and be unable to be controlled by the humans that originally programmed them. And that's what will lead to the potential disaster of them just killing everyone, or consuming all the resources such that most people have horrible lives, or keeping us as pets. You know, these are all scenarios that the community talks about.

Speaker 1 (12:16):
When you say this is what the community talks about, there are people who, like, ask other people, like at a party or something, so, what's your p(doom)? And somebody else says, yes, oh, mine's thirty five. And, oh yeah, you know what, I'm at about a seventy five recently. My p(doom) is seventy five.
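(To spell the shorthand out, purely as an illustration of the notation rather than anything said aloud in the episode: p(doom) is a subjective probability, a number between zero and one, so those party answers translate to

\[
p(\mathrm{doom}) = \Pr(\text{humanity is destroyed by AI}) \in [0,1],
\qquad \text{``thirty five''} \to 0.35,
\qquad \text{``seventy five''} \to 0.75.
\])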

Speaker 2 (12:30):
This happens, Yes, this happens. In addition to me fundamentally
disagreeing with the ideological foundations of probability of doom, I
also disagree with the veneer of rigor that a phrase
like p(doom) ascribes to something that is inherently unrigorous. So they're putting mathematical values on something that is inherently illogical.

(12:54):
There is no scientific evidence that we can point to
that AI will go rogue, that it will do any
of these things. It really is based on a belief,
and over the last few years, especially within Silicon Valley,
there's been the cultivation of what I call quasi religious
movements around what AI is, what it will be, and

(13:14):
ultimately how it will impact people. And we just talked
about what most people call the Doomer ideology, and then
there's the Boomer ideology, which also believes that it's fundamentally
possible to recreate human intelligence in computers, but for them
this will be a positive civilizational transformation rather than a
negative one. And both of them do not actually have

(13:35):
scientific evidence one way or the other. They are talking about theoretical scenarios and projecting into the future based on
their own conceptions of what human intelligence is and the
idea that human intelligence is fundamentally computable, and they do
not actually observe the real world harms or benefits of
this technology as part of these philosophies, these ideologies.

Speaker 1 (13:59):
Okay, to clarify here when people talk about Boomers and Doomers in the world of AI. Again, a Boomer is someone who is really optimistic about AI, for example believing that
once we get artificial general intelligence, or basically an AI
that's overall smarter than humans, it will lead us to a utopia: prosperity, solving disease, fixing the climate, having

(14:22):
a colony on the moon, food for everybody, all that
sort of thing. A Doomer is pessimistic about it. They
think that AI will get smart and then suddenly go
full Skynet and kill everybody. So usually when we
hear that there are two sides to an argument, we figure, Okay,
the truth is probably somewhere in the middle there. But
what if both sides are wrong? What if we're arguing

(14:46):
about a question that doesn't even make sense? And this
is one of the really interesting things I think about
the process of reading the book. Right, it starts off as an inter-office drama. There's some backstabbing here. There's a group of employees that really believes in this leader, there's a group over there, and they're arguing about who's going to be the next leader, and do we need to get rid of this guy? Do we not

(15:06):
need to get rid of this guy? But you've talked to so many people, and as you start to see how people are interacting with each other, you realize: wait, this isn't just an inter-office drama of personalities coming up against each other. This is people who fundamentally believe something fairly deeply, and those beliefs are really clashing against each other.

Speaker 2 (15:26):
Yeah. The shocking thing was, as I was interviewing people who I identified as part of the Boomer group or part of the Doomer group: I mean, the Boomers, their eyes would light up when they were talking about this potential future where prosperity was abundant and everything would be perfect. And the Doomers, I spoke to people

(15:48):
whose voices were quivering with anxiety. This was a genuine emotional, visceral reaction to the idea that, wow, we only had a
few years left on this earth potentially if we did
not figure out how to get a handle on this
technology and make it go well instead of badly. And

(16:09):
that's when I began to understand more deeply: oh, there are so many more layers to all of the headlines that we see about this company, about this technology, and about the dramatic firing and rehiring of Sam Altman. There are so many deeper spiritual layers behind the clashing that's happening

(16:29):
to shape this technology.

Speaker 1 (16:31):
But really, what is this spiritual fight that we're trying
to have here? More on that after the break. So
it's interesting that people sometimes compare Sam Altman, the head

(16:52):
of OpenAI, to Steve Jobs. You say in the
book that in some ways he's this generational talent, but
you also explain that there's a lot of people who
really just don't trust him, that he's polarizing, because obviously
there's a way in which I mean Steve Jobs is
also a polarizing figure too. There's people who think he's
an absolute genius. There's people who think he's a jerk

(17:12):
who just stole ideas, and there's people who think he's
a genius and he's a jerk. Yeah, yeah, right, But
then if you think here about what are the stakes
of what Steve Jobs was doing? If you want to
give him all the credit, yeah, is what was he doing?
What's the mission? Make beautiful products, make very easy to
use computers great? What are the stakes for open ai? Actually,

(17:34):
hold on, let me not even try to answer this.
What is OpenAI's mission? Because that's something that seems to change through the book. So what is OpenAI trying to do here?

Speaker 2 (17:44):
Yeah. So their mission, which on paper has never changed, is to ensure artificial general intelligence benefits all of humanity. That's the direct quote. The challenge is that each of the components of this mission is extremely ill-defined. I mean, there used to be a joke within OpenAI: if you ask thirteen researchers at the company what artificial

(18:08):
general intelligence is, you'll get fifteen answers. And it's pretty true for all the components of the mission. So what's happened over the course of the organization's history, which originally started as a nonprofit and is now one of the most capitalistic companies in Silicon Valley history, is that different people interpret the mission in fundamentally different ways. The Boomers

(18:31):
interpret benefiting all of humanity as build this technology as
fast as possible and unleash it onto the world as
quickly as possible. The Doomers interpret it as build this technology
as fast as possible and hold onto the technology so
that we have a lead time to do more research
on it before bad actors have a chance to do
research on it as well. And then there are plenty

(18:52):
of other people who are not necessarily in the Boomer or Doomer category, that are just regular tech company people, who came from Facebook, who came from Google, who came from Microsoft, who are just like, the mission of benefiting all of humanity is building products that people want to pay for. There's such a vast range of
interpretations that essentially all the mission does is it just

(19:16):
allows people to put a mirror up to themselves and say, what is it that I want? And to make themselves the protagonist of their own story and say, what I want is the most beneficial for humanity, so that's my interpretation of what OpenAI should be doing. And that is part of the reason why OpenAI has had so much drama through its history, because no one

(19:37):
can ever agree on the most fundamental building block of
the company. No one can agree on what direction they
should actually be going.

Speaker 1 (19:48):
Speaking of benefiting all of humanity, a couple episodes ago,
we did an episode about the impact of AI on
the environment, climate change, things like that. But then there's
this kind of background, for anybody who's really interested in AI, really kind of an AI promoter, there's this counterclaim that, okay, any bad stuff we do to the environment, AI will fix it. Yeah,

(20:09):
AI can fix climate change. Yeah, you've run up against
this claim in person more than I have. Have you
been able to make any sense of that? What's the
argument here that AI is going to fix climate change
or AI can help there?

Speaker 2 (20:23):
The most charitable relaying of the argument is for people
who believe that human intelligence is fundamentally computable, and therefore
if you have enough data and you have enough compute,
you will inevitably be able to recreate it and create
so-called artificial general intelligence. Then you should be able

(20:44):
to solve any problem at that point. Because the challenge with us as humans, our inability to deal with the climate crisis, is a lack of cooperation. And digital intelligences, they won't have egos, they won't be, like, clashing against each other, so the saying goes, and so they won't

(21:04):
have any issue cooperating, and they also won't have any
lack of ideas or ability to experiment, develop new energy
storage solutions, develop new forms of renewable energy, and so
on and so forth. So that is the kind of
most charitable version of why artificial general intelligence could fundamentally

(21:25):
solve climate change. My critique is, again, we don't have
scientific evidence that AGI will ever come to pass, and
so we are basically trying to justify current-day vast environmental harms and the acceleration of the climate crisis with a speculative possibility that they might one day be able

(21:47):
to go away. And so we're essentially trying to cover
up real scientific evidence of present day reality with a
spiritual belief that it'll all be okay in the end.

Speaker 1 (22:01):
Listen, I would go further. I mean, my issue with this is, even if an AI agent, your chatbot, can spit out the answer, hey, here's what to do, do this, do this, and do this, and climate change will grind to a halt, what if we're not interested in listening? You know what I mean?

Speaker 3 (22:18):
Yeah? Right?

Speaker 1 (22:19):
You could get on ChatGPT, you can get on Claude, you can get on Gemini or whatever and say, hey, AI friend, I'm feeling really sick. I'm eating all of this cake and I really don't feel good. What should I do? And it'll say, hey, buddy, stop eating cake. I'm gonna keep eating cake.

Speaker 3 (22:34):
Yo.

Speaker 1 (22:34):
You don't actually have to listen. The computer can't make
you listen, even if it has the answer. And here
in the States particularly, we've got the data. There are
some ideas on things we could do to reduce the
impact on the environment.

Speaker 2 (22:48):
We're just not doing them. Exactly. I think this is one of the things that has always broken down in these theoretical future AGI arguments about whether it's going to
be fundamentally positively or negatively transformative. There's never a clear
articulation of how it's operationalized in the physical world. Are
they going to have robot bodies, are they going to
be mining the earth and developing and cultivating these new

(23:12):
energy storage solutions? Or are they directing humans to do that, at which point we still run into all of the same problems that we've always had, right, which is humans don't listen. So I think you're hitting upon one of the core weaknesses of many of the arguments in this community, which is they do not actually acknowledge the social, political,

(23:37):
and economic aspects of the way that technology ultimately impacts society.
Like it's not just you have a technical capability and
everything is suddenly solved. There's so many more layers to
how the capability then translates into real world impact.

Speaker 1 (23:58):
Yeah, I'm sympathetic to some of the ideas that are
put forth, like AI could cure diseases. Hey, look, you
can mix these three chemicals and it gives you a
pill and it's two bucks. Give it to everybody. Okay,
I'll buy that. And also healthcare obviously has a huge
social component to it. But yeah, for something that truly is

(24:18):
linked to our behavior, I struggle to see how exactly
that's going to work.

Speaker 2 (24:24):
What's interesting about the drug discovery or curing diseases thing, too, is there's a word game that people within the AI world play when they talk about how AGI is going to cure cancer. What's confusing is that there are plenty of AI technologies that can help us advance and tackle that challenge, but they have nothing to

(24:48):
do with the type of AI that OpenAI or
the rest of these Silicon Valley companies are building. So
they play this game where they just say AI and
AI is a huge umbrella term. It's like the word transportation,
like you could be talking about a bicycle or a bus, or a gas-guzzling truck or a rocket. These are all
different forms of transportation, and they're building rockets, but they're

(25:12):
pointing to the benefits of bicycles and public transport. And
so when these companies say AI is going to help
us cure diseases, there are plenty of AI technologies that
literally have no relation to what they do that are
making positive impacts on healthcare. There are machine learning models
that can be trained on MRI scans to identify cancer,

(25:34):
and there have been studies that show that if you
give these tools to trained radiologists, they will be able
to identify cancer far earlier with a much higher accuracy,
such that patients can actually intervene early on and have
a much higher likelihood of kicking that disease. There is also this: last year, the twenty twenty four Nobel Prize in Chemistry was awarded to a team at DeepMind that, before

(25:58):
joining this large language model race, developed this tool called AlphaFold. It was able to predict with extremely high accuracy the structures of proteins from their amino acid sequences, and that in and of itself is going to be one extremely critical building block for understanding disease and for discovering new drugs. It was trained on

(26:19):
extremely clean, highly curated data sets. It was just amino acids and protein folding structures. That was a very task-specific AI model that was then able to do incredible things, because fundamentally it was a well-scoped, highly computational problem,
and that's what AI is good at. You throw AI

(26:39):
at a highly computational problem and it will compute. But
what these companies are doing, and the critique that I have, is they pretend that they're trying to build everything machines, and they're trying to do that by scraping the entirety of the English-language Internet, just a boatload of polluted data that has nothing to do with healthcare.
But it's just like people throwing curse words at each other online, and then they're like, this is going to cure cancer. And it's like, what are you smoking? Like, how is that gonna get us to, like, what? We already have these other AI technologies that are making these advancements, that are being trained on actual, high-quality
these advancements, that are being trained on actual, high quality

(27:23):
data sets. And then you're gonna pump a bunch of
gunk into this large, nebulous, large language model that really
has very little articulated purpose and say it's going

Speaker 1 (27:35):
To do the same thing. And pump gunk into the environment.

Speaker 2 (27:38):
And pump gunk into the environment and exploit a lot of labor and potentially automate away a ton of jobs in the process.

Speaker 1 (27:46):
So, yeah, there are people in Silicon Valley who love
to pitch AI as a silver bullet for everything from
cancer to climate change. But of course the reality is
not that simple. What this company or that company is
building could really solve some of these problems. But as
they're doing it, they're also doing things that you might
have read about before, not in Forbes or in Wired magazine,

(28:09):
but in your middle school history textbook. Companies in the
industry are starting to act a lot more like empires: empires that extract resources, expand rapidly, and then leave communities
to deal with the consequences. We'll get into some of
those consequences after the break. Some of the consequences of

(28:40):
AI's colonialism you might already be able to imagine, like
when Google went to a part of Chile that was
in the middle of a huge water crisis and tried
to build a massive data center. If you think back
to our episode on AI and the Environment, you know
that data centers use up a ton of water. That didn't seem to matter to Google. But it's not just environmental.

(29:03):
In the book, Karen also talks about how the industry
always seems to go to countries undergoing some kind of
economic crisis and then find workers there to exploit. Karen
found that open ai used an intermediary company to contract
workers in Kenya for less than two dollars an hour
to build automated content moderation filters. For these filters to work,

(29:24):
you need humans to catalog sometimes hundreds of thousands of examples of things like the graphic content that OpenAI wanted to prevent its model from generating. One worker on the quote-unquote sexual content team had to review fifteen thousand pieces of sexual content per month. And we're not talking just regular porn; I also mean child sexual abuse material. Some

(29:48):
of this material was even generated by OpenAI's own software, so that, again, they could check if the filters were catching the bad stuff. But for the workers who were manually sifting through this stuff day after day, it caused some serious mental consequences. Environmental exploitation, worker exploitation: this sounds like

(30:08):
straight up empire behavior, and it feels familiar. You could
take a map of the colonial powers and the colonies
like India, Latin America, parts of West Africa from a
couple centuries ago and lay it over a map of
where these AI companies are exploiting now, and it would
line up almost exactly. So we've read about this stuff

(30:29):
in the past in school. So now what? This brings me to a question, though. I don't know if this is pushing back or maybe I'm just a little bit pessimistic here. But if you talk about colonialism, you know, empires, imperialism, those words mean different things in different places. Absolutely,
I think if you go to a place that was colonized,
it's going to bring back memories of slavery, memories of exploitation,

(30:52):
memories of poverty, starvation, generations of wrecked governments, things like that.
For an American, we think colonial is a furniture style,
you know what I mean. We think it's a cool way to build your house. Oh God, let's be real. That is so real.
And so I think about this, and I think about
your explanation of the imperialism of AI companies, and I think one

(31:16):
of the features of an empire, one of the features
of a colonialist empire, is that not only the leaders, but
the people who live there also start to believe that
that's just the natural way of things, that oh, there
are people in Kenya who are being forced to see
just absolutely horrific carnage imagery and getting paid cents and oh, well,

(31:38):
that's just the way things go. That hey, third world country,
Global South, that's what happens there and it doesn't happen here.
Let's just be real. How do you make a reader
who is in the empire understand that?

Speaker 2 (31:51):
You know, I went on book tour, I stopped in Seattle, and I had this wonderful opportunity to talk to Ted Chiang, one of the most decorated science fiction writers ever, and we were talking about this exact question. And he told me, and I think he's exactly right, he was like: your job is not to convince the people that are already convinced of something completely ideologically opposite to you. If someone

(32:16):
already is convinced that there should be a hierarchy in the world and that certain people don't deserve fundamental, basic human rights like you, do not waste your time convincing them. Who you're trying to convince is the broader public, people who just don't really know how to think about this technology and don't really know how to interface with it. Ultimately,

(32:37):
all empires are made to feel inevitable, but all empires
fall because the majority of people under subjugation end up
rising up and protesting the empire, and those are the
people that you should be speaking to. Another feature of
empire building is there are people who are richly rewarded
by empire, usually the people that are most powerful politically

(32:58):
and economically. Those are the people that are rewarded, and
that's why empires are able to perpetuate so long in
the first place, because the people who are totally coddled
into believing that this is a great state of affairs
are the people that then have the most access to
all of the levers to maintain the status quo. And
so he was like, don't talk to those people. Those
are not the people that you should focus on, because

(33:20):
they're never going to change their minds, like both philosophically
because they don't see a problem with an exploitative, extractive worldview,
but also because all of the evidence that they are
exposed to reinforces the idea that things are just fine,
and so focus on everyone else, focus on rallying the

(33:41):
majority of the world around this idea. When I tell
people this empire metaphor, outside the US, no one has
ever questioned me. When I was talking with Chilean water
activists for my book about the expansion of data centers
in their community, they were the first to bring up the relation to their history, their colonial history. Really? Yeah. I

(34:05):
didn't actually even say it. They were like, what's happening
now is what's happened to us for centuries, first at
the hands of Spanish colonizers, then at the hands of
American multinationals, and now at the hands of new American multinationals.
And to your point that, like every different country experiences
colonialism differently, I mean, it was remarkable how they still

(34:29):
experience it differently, but in exactly the same way as before.
So in Kenya they were like, there is a connection
between slavery and labor exploitation happening now with the AI industry.
And Chile is a country that has had its natural resources extracted again and again. And the
term extractivism, which is an anti-colonial term that refers

(34:54):
to the idea that massive amounts of resources are extracted from one place and used to benefit a faraway place, while no benefit goes to the local community. That was originally a term coined by scholars in Latin America, extractivismo in Spanish and extrativismo in Portuguese. And they were like, this is extractivism, right? We've seen this.

(35:16):
There's a connection between the Spanish colonial extractivism and the AI industry's extractivism. So it literally is a replaying. Yeah. And for people who live that history and understand that history, there's no leap that they have to make to connect the two.

Speaker 1 (35:32):
That makes a lot of sense. And to be clear,
this isn't just happening in what we consider, quote, the Global South. It happens in the US too. That example of Google trying to build a data center in Chile despite the residents not wanting it might sound familiar. In our episode on AI and the Environment, we talked a little bit about Memphis, where Elon Musk's company xAI has

(35:56):
been running gas turbines without permits for months now. It's
making people there sick. Just a couple of weeks ago,
More Perfect Union published a deeper dive on what's happening
in Memphis that gives some more details, and xAI has already started building a second data center. So in Chile,
the local people organized and successfully stopped Google from building

(36:19):
that data center. In Memphis, that fight is still ongoing.
But even if they're able to shut those turbines down tomorrow,
what happens to the people who were already hurt by
what xAI has done to the environment there? It's just another example of what Karen Hao describes as the rise of AI empires: taking resources, dodging accountability, and leaving communities

(36:42):
to deal with the consequences. You talk about some potential
alternative ways that this is going to be used or
this could be used, and so you're not necessarily talking
about an everybody-turn-off-the-computer type thing. But yeah, just tying in with the title, where's the kill switch on this thing? And what is hitting the kill switch?

Speaker 2 (37:00):
So to me, hitting the kill switch in this context
is killing the imperial conception of AI development. Not killing
all AI development, but the imperial one where people at
the top can just say this is how it's going
to go and then consume the entire world's resources in pursuit of an amorphous vision of progress. The thing that I want to see, and I think how

(37:21):
we can get there. I want to see broadly beneficial,
task-specific AI models that are developed and deployed through
the participation of communities. And when you think about the
AI supply chain, which I try to make visible in
my book, there's data, there's land, there's energy, there's water,
there's labor. There are spaces that companies need access to

(37:45):
deploy their technologies, like schools and hospitals and government agencies.
Silicon Valley has done a really great job over the
last ten years of making people feel like these resources
and these spaces are actually owned by Silicon Valley. But no, they're owned by us. That data is your data, it's my data. That intellectual property is the intellectual property of artists, writers, creators. The schools, that's collectively owned by

(38:08):
teachers and students. Those hospitals are collectively owned by doctors, nurses,
and patients. And we're already seeing movements around the world
of people actually fighting back and reclaiming ownership of those
resources and those spaces. So artists and writers are suing these companies, saying no, you can't just take our intellectual property. The Chilean water activists that I write about in my book said no, you can't just take

(38:29):
our fresh water, and they successfully stalled Google from building a data center within their community for, now, five years. Many, many movements around the world are replicating that to push back against data centers. Teachers and students are having public debates now, saying: do we actually want AI in our schools, and if so, under what terms? Because

(38:49):
we want it to foster curiosity and critical thinking, not just totally erode it away. And if we can have those conversations one hundred thousand times over and start moving more towards task-specific AI technologies, we will get to
a place where we do have AI that is broadly

(39:11):
beneficial and actually works for the people rather than us
working for AI.

Speaker 1 (39:17):
Thank you, Thank you so much. Hope to be able
to chat with you again.

Speaker 2 (39:20):
Thank you so much, Dexter.

Speaker 1 (39:26):
All right, so this is the part of the episode
where I usually have some kind of closing thoughts, you know,
add my little two cents, three cents, go on a little bit. But this time I'm going to keep it to four words: go read Karen's book. Seriously, just go
read Karen's book. One thing we didn't talk about, and

(39:46):
one thing I really do like about the book that
I should say is that you could pick it up
with absolutely no idea how AI works and you'll not
only understand the societal implications that you know we kind
of talked about in this episode, but you'll come away
with a better understanding of the technology of AI than
most people, even if you're one of those people who
says you don't like computers, you hate computers, you don't

(40:08):
understand them. For real, it breaks it down in a way that I've never seen done before. So, highly recommend it.
And if you've already read the book or you just
want to talk about something else, let us know what
you think. We're on Instagram at killswitchpod, or you can hit me at dexdigi, that's d-e-x-d-i-g-i, again on Instagram, or I'm also on Bluesky.

(40:30):
And if you haven't done it yet, leave us a
review on your favorite podcast platform. People actually read those things,
and your review could be the thing that convinces someone, or someones, to check us out. This show is hosted by me, Dexter Thomas. It's produced by Shena Ozaki, Darluk Potts, and Kate Osborne. Our theme song is by

(40:51):
me and Kyle Murdoch, and Kyle also mixed the show. From Kaleidoscope, our executive producers are Oz Woloshyn, Mangesh Hattikudur, and Kate Osborne. From iHeart, our executive producers are Katrina Norvell and Nikki Ettore. Catch you on the next one. Goodbye.
