Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
I've lived in LA for a decade, and this whole time,
I haven't owned a car. When I tell people that,
they usually look at me weird. And yes, riding the
bus and walking and using a bike is less convenient,
but at this point I'm used to it. But sometimes
I do wonder if I should just give in and
buy a car like everyone else. So to help me decide,
(00:31):
I did what a lot of people do recently when
they're weighing options. I asked AI. I opened up Claude
dot AI and I input all my current transportation costs.
I put in my bus fares, my bike, the cost of my occasional Uber, and I asked it to compare
that to the average costs of car ownership in Los Angeles,
so parking, gas, insurance, repairs, all that sort of thing.
(00:56):
And I asked it to give me the pros and
cons on either side, and it did. The big con
is convenience, which I already knew. And on the pro side,
it said that I was saving thousands of dollars per year,
but it added one extra thing. It said that I
could have the nice feeling of knowing that I was also being eco-friendly. And I thought, hold on, wait
(01:18):
a second. Eco-friendly? I just spent half an hour running scenarios through an LLM, which I know is built
off the back of a massive amount of computing, which
in turn means a massive amount of energy. So am
I actually helping the environment here? Or am I hurting it?
So this week I set out to answer what seems
(01:40):
like a pretty simple question: how bad is AI's environmental impact, really?
And yes, before you ask, I did consider asking Claude
and maybe ChatGPT about AI's own impact on the environment.
But then I figured, you know what, maybe this is
a question I should ask actual human beings. And I
found a couple of people who've been studying this stuff
(02:02):
for a while to help me parse all of this.
Is there a way that I can compare? Is my
AI usage worse or better than car usage, or worse
or better than my impact on the environment from eating
meat or something like that? Are we able to make
those kinds of comparisons?
Speaker 2 (02:20):
That's what carbon footprints were kind of invented for, so
you can make this type of comparison. If you're driving a fossil fuel based car, you know exactly how much gas you're using and what that might mean in terms of carbon emissions. That's pretty straightforward. It's much harder to do this for AI, I'm afraid.
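(To make the gas-car side of that comparison concrete: the conversion Alex is describing really is a couple of lines of arithmetic. A minimal sketch, assuming the commonly cited EPA-style figure of roughly 8.9 kilograms of CO2 per gallon of gasoline; that constant and the 30 mpg example are my assumptions, not numbers from the episode.)

```python
# Sketch of the carbon-footprint math for a gas car: fuel burned maps
# directly to CO2 emitted. 8.9 kg CO2/gallon is an assumed EPA-style figure.
KG_CO2_PER_GALLON = 8.9

def car_co2_kg(miles: float, mpg: float) -> float:
    """CO2 in kilograms for driving `miles` in a car that gets `mpg`."""
    return (miles / mpg) * KG_CO2_PER_GALLON

print(car_co2_kg(100, 30))  # ~29.7 kg of CO2 for 100 miles at 30 mpg
```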
Speaker 1 (02:39):
From Kaleidoscope and iHeart Podcasts, this is Kill Switch. I'm Dexter Thomas.
Speaker 2 (02:52):
I'm sorry.
Speaker 1 (03:21):
If you use social media, you've probably seen people being
criticized for using AI, and depending on who you hang
out with, that criticism can be kind of different in
how it shows up. When I see someone post an
AI generated image or some AI generated text and there's
angry comments in the comment section, it's usually one of
two types. The first is people saying that it's disrespectful
(03:45):
that by posting AI-generated poetry or drawings you're devaluing the original artists who didn't consent to having their work fed into an LLM. That one's pretty easy
to understand, even if you don't agree with it or
think it's a big deal. The other comment I see
a lot of is people saying that using AI is
destroying the environment. Figuring out whether that's a big deal
(04:07):
or not is a little bit less straightforward.
Speaker 2 (04:09):
What I tried to do in my research is keep track of how the global electricity consumption of AI is developing.
Speaker 1 (04:18):
Alex de Vries is the founder of Digiconomist and a PhD candidate at VU Amsterdam. He's been researching the
sustainability of new technologies for about a decade.
Speaker 2 (04:27):
The way I do that is by looking at how many specialized AI devices are being produced by the AI hardware supply chain, and then, considering their power consumption profile, how much power is now being consumed by all of these devices. Which is a very imperfect way of keeping track of this, but it's kind of like the only tool you have available at the moment.
Speaker 1 (04:48):
And even Alex is having a hard time keeping up with this. When I called him, he was in the middle of putting together new research. Back in twenty twenty three, his data showed that by twenty twenty seven, new AI servers sold could use the same amount of energy annually as the yearly energy consumption of a country like Argentina or the Netherlands. But things have accelerated. His current research
(05:10):
shows that it won't take until twenty twenty seven for
that to happen. At this rate, we're going to hit
that mark sometime this year.
Speaker 2 (05:18):
Simply because now the amount of devices that's being produced
by the AI hardware supply chain is way higher than
it was two years ago.
Speaker 1 (05:26):
So it's even exceeding your pretty bleak estimations that you
made a while ago.
Speaker 2 (05:32):
Oh yeah. It's just that the hype is so big, and the demand for this type of hardware is so big, that the numbers are going up much faster than could be anticipated just two years ago.
Speaker 1 (05:43):
But hold on, before we get too much further, let's
just clarify what we're even talking about when we say AI.
If you could break it down for me, how does
artificial intelligence use natural resources?
Speaker 3 (05:57):
Yeah, it's a general umbrella term that includes many different things. But right now, if you're talking to a random person on the street, when they say AI, they're referring to large language models, or maybe image generation models. So these are the generative AI models.
Speaker 1 (06:14):
Shaolei Ren is an associate professor of electrical and computer
engineering at the University of California, Riverside, and he's kind
of a colleague of mine. Our fields are completely different.
But a couple of years ago I taught a class
in the building right next to his. I'd had no
idea that on the same campus there was an expert
who'd been researching the environmental impact of generative AI the
(06:36):
whole time, and I thought, perfect, this guy's kind of
a colleague, so I can stop doing all this research
on my own and just go ask him. Can you
give me an idea of how, say, car usage compares
to usage of an AI model?
Speaker 3 (06:54):
I would say, for a large language model or a medium-sized language model, writing roughly ten short emails could consume a quarter of a kilowatt-hour of energy. So that's roughly enough to drive a Tesla Model 3
Speaker 2 (07:11):
For one mile. Or, as Alex puts it: ChatGPT must be running on something like five hundred megawatt-hours a day, which is enough to power a small city.
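(A quick back-of-the-envelope check on that small-city claim, assuming an average US household uses about 29 kilowatt-hours of electricity per day; that household figure is my assumption, not from the episode.)

```python
# Rough scale check on Alex's 500 MWh/day estimate for ChatGPT.
CHATGPT_MWH_PER_DAY = 500        # Alex's figure
KWH_PER_HOME_PER_DAY = 29        # assumed average US household usage

homes_powered = CHATGPT_MWH_PER_DAY * 1_000 / KWH_PER_HOME_PER_DAY
print(f"~{homes_powered:,.0f} households")  # ~17,241 -- a small city's worth
```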
Speaker 1 (07:21):
Basically, ChatGPT's overall daily energy use is about the same as powering every home, every grocery store, every streetlight in a small city like San Luis Obispo in California or Ithaca in upstate New York. But what does that actually mean for me and you? How much energy does it take to just ask ChatGPT one question?
Speaker 2 (07:44):
Per interaction basis? It's actually not that much. You're talking
about something like three one hours maybe per interaction that's
something like a low lumin led build that you have
running for one hour. It's not a lot of power,
but it's it's nevertheless significantly more than a standard Google.
Speaker 1 (08:00):
In one second. Just as an aside here: we
usually don't think of something like a Google Search as
using electricity. I mean, your phone or your computer is
already on, so what does it matter if you're typing
stuff into it or not. But on the other end of that Google search you typed in, there are servers, and those are using energy. So as we keep going in
(08:20):
this episode, maybe think about that: on your end you're not seeing any energy used or environmental effects, but doing a Google search, watching a video, or even downloading this podcast, that does use some amount of energy.
Speaker 2 (08:35):
Even Google's CEO at some point commented, like, hey, interacting
with these large language models, it takes ten times more
power than the standard Google search. And that would mean that if you're talking about three watt hours per interaction with a large language model, for a standard Google search it would be like zero point three watt hours, which is a very, very tiny amount.
Speaker 1 (08:56):
Just to explain here, a watt hour is a unit that tells you how much energy a device uses over time.
For example, a sixty watt light bulb running for an
hour uses sixty watt hours. A single Google search uses
about zero point three watt hours. That's enough to power
that same light bulb for around eighteen seconds. But now
(09:16):
there's that AI add-on that comes stacked by default on top of every Google search, which takes that number up ten x, up to three full watt hours per search. That's a little different. Now you're running that same light bulb for three full minutes.
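(Those light-bulb numbers check out with nothing more than time = energy / power; a quick sketch using the figures above.)

```python
# time = energy / power: how long a 60 W bulb runs on a given energy budget.
BULB_WATTS = 60

def bulb_seconds(watt_hours: float) -> float:
    """Seconds a 60 W bulb runs on `watt_hours` of energy."""
    return watt_hours / BULB_WATTS * 3600  # hours -> seconds

print(bulb_seconds(0.3))  # 18.0  -> one classic Google search
print(bulb_seconds(3.0))  # 180.0 -> three minutes, one AI-assisted search
```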
Speaker 2 (09:32):
But it's of course in the number of interactions where these numbers start to stack up quickly. Because if you're talking about Google scale, you're talking about nine billion interactions a day. Going at three watt hours per interaction, then, interestingly, the whole company Google would require as much power as Ireland just to serve its search engine.
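(The stacking-up Alex describes is straight multiplication. Here's a sketch using only the two numbers he gives; whether the annual total really matches a country like Ireland depends on assumptions beyond these two figures.)

```python
# Scaling the ~3 Wh per AI interaction up to Google's search volume.
SEARCHES_PER_DAY = 9_000_000_000   # "nine billion interactions a day"
WH_PER_AI_SEARCH = 3               # Alex's per-interaction estimate

gwh_per_day = SEARCHES_PER_DAY * WH_PER_AI_SEARCH / 1e9
twh_per_year = gwh_per_day * 365 / 1_000
print(f"{gwh_per_day:.0f} GWh/day, ~{twh_per_year:.1f} TWh/year")
# 27 GWh/day, ~9.9 TWh/year -- country-scale electricity demand
```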
Speaker 1 (09:51):
If that was the case. Wow. Using as much power as a small country sounds wild, but if we think about it, it kind of makes sense. We've, by default, ten-xed our energy use overnight across nine billion searches a day.
That energy use is going to add up pretty fast.
But there's another thing to consider when we talk about
(10:13):
AI's energy use: the difference between training the model, or giving it a bunch of data to teach it how to work, and using it, like when you ask it to write a cover letter or I ask it if I should buy a car. When we talk about AI
and the energy consumption that can go into AI, there's
different phases, right. There's the training phase. There's me actually
(10:34):
sitting down and asking an agent a question. Can you break that down for me?
Speaker 3 (10:39):
The training part, we call it learning. Based on the data, we try to optimize the parameters so that when we see some new queries from the users, we can give you as accurate an answer as possible. And training is really one time. Of course, later we're going to do some updates, fine tuning. Inference is when the users actually interact with the model. And depending on the
(11:01):
popularity of the model, once it gets trained, it could be used many hundreds of millions of times or even billions of times. If you train a large language model like Llama 3.1, according to the data released by Meta, the air pollutants generated through training a large language model like that would be roughly equivalent to more than ten thousand round trips
(11:24):
by car between LA and New
Speaker 1 (11:25):
York City. Ten thousand round trips by car? Yeah, so that sounds bad, that sounds like a lot. But is that a one time thing? It's just the one time?
Speaker 3 (11:36):
It's a one time thing.
Speaker 1 (11:39):
Let's clear something up here. That number, ten thousand round trips from LA to New York by car: it's not just about carbon, it's about air pollution, specifically things like nitrogen oxides and fine particles that come from power plants and can get deep into your lungs. This isn't theoretical. This is stuff that raises risks of diseases like cancer,
(12:00):
and it doesn't just affect people next to the place where all those computers are. Pollution travels and it lingers. So what Shaolei's talking about here isn't just numbers. His calculations are showing that training a single model the size of Meta's Llama 3.1 can produce that level of pollution on its own. So yes, training these models is
(12:21):
a one time hit, but it's a big one. If we're talking just about energy usage, using an LLM to, say, write ten emails might be like driving an electric vehicle for a mile. And since an electric vehicle is maybe three times more efficient than a gas vehicle, figure those ten emails might get you a quarter to a third of a mile in a regular car. And yeah, maybe
(12:43):
these are relevant numbers for me and my decision about whether or not my AI usage is counterbalanced by me not having a car. But these numbers are just estimates, and we are going to get to that. But the bigger issue here is that running those data centers doesn't just use electricity. And this is where Shaolei's research comes in, because we've heard about AI's carbon footprint, but what
(13:06):
about its water footprint, which could be a much bigger
concern for us living here on earth. That's after the break.
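(The emails-to-miles conversion above, written out. The three-times EV-versus-gas efficiency factor is the episode's rough figure; the four miles per kilowatt-hour is implied by Shaolei's Model 3 comparison.)

```python
# Ten short LLM-written emails ~ 0.25 kWh, per Shaolei's estimate.
KWH_PER_10_EMAILS = 0.25
EV_MILES_PER_KWH = 4        # ~a Model 3: one mile per quarter kWh
EV_TO_GAS_FACTOR = 3        # EV assumed ~3x more efficient than gas

ev_miles = KWH_PER_10_EMAILS * EV_MILES_PER_KWH
gas_miles = ev_miles / EV_TO_GAS_FACTOR
print(ev_miles, round(gas_miles, 2))  # 1.0 EV mile, ~0.33 gas-car miles
```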
(13:26):
So you had a study come out last year called Making AI Less Thirsty: Uncovering and Addressing the Secret Water Footprint of AI Models. What made you want to look at this?
Speaker 3 (13:37):
Maybe that was due to my childhood experience. I spent
a couple of years in a small town in China
where we only had water access for half an hour
each day, so we just had to think about how
to use water wisely and use every possible means to save water. Then in twenty thirteen, I saw this issue. I wanted to find out more about it: what
(13:57):
about the water consumption? And nobody knew at that time.
Speaker 1 (14:01):
A big environmental impact we don't talk about as often as carbon emissions is water usage. And the impact that water usage has on all of us depends on where that water comes from and where it goes. When it comes to AI, a main use of water is to cool down the data centers, which, as we know, use a lot of energy. This is how they make sure
(14:22):
that they don't overheat.
Speaker 3 (14:23):
To prevent servers from overheating, usually we use water evaporation,
and that's a very efficient way to move the heat,
to dissipate the heat to the environment, and this water
evaporation could be in the cooling towers. That is essentially
evaporating water twenty four to seven.
Speaker 1 (14:38):
When water evaporates from a data center's cooling system, it
goes out into the air and is basically considered gone,
at least from the local supply. You might be thinking
of the water that you use when you take a shower, how that water goes down the drain, gets treated, and can be reused. But evaporated water rises up
into the atmosphere and you can't reuse it. It can
(15:00):
eventually come back down as rain, but that takes a while.
Speaker 3 (15:04):
Some tech companies they can use over twenty billion liters
of water each year.
Speaker 1 (15:09):
Twenty billion?
Speaker 3 (15:11):
Number basically is the same as some major beverage companies
annual water consumption, the water they put into their product,
basically the water we drink from a bottled water. Those
are the water consumption for the beverage industry. So in
some sense, this AI is turning these tech companies into
a beverage company in term of water consumption.
Speaker 1 (15:32):
Nobody's drinking that bottled water or those sodas. It's just evaporating.
Speaker 3 (15:38):
Yes, yes, yes.
Speaker 1 (15:40):
One important thing here is that when Shaolei's talking about water, he's talking about a specific kind of water. For example, you might have heard that producing every kilogram of beef takes fifteen thousand liters of water. But ninety percent of that water is what's called green water. That's water that's naturally stored in soil and used by plants, like
(16:00):
rainwater. It doesn't have to be clean enough for people to drink it. It would be nice if data centers could use that, but that's not really practical for their usage. They rely on what's called blue water, the stuff that's clean enough for humans to drink. So when Shaolei is comparing a tech company's usage of water to, say, Pepsi's global use of water, this is a
(16:23):
pretty direct comparison. You used a phrase when you were evaluating GPT-3, that GPT-3 needs to drink a certain amount of water.
Speaker 3 (16:32):
Roughly ten to fifty queries for five hundred milliliters of water, so basically a bottle of water.
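(Put per query, Shaolei's range works out to roughly ten to fifty milliliters each; a quick sketch.)

```python
# 500 mL per 10-50 queries, per the Making AI Less Thirsty estimates.
ML_PER_BOTTLE = 500
QUERIES_LOW, QUERIES_HIGH = 10, 50

print(ML_PER_BOTTLE / QUERIES_HIGH)  # 10.0 mL/query at the efficient end
print(ML_PER_BOTTLE / QUERIES_LOW)   # 50.0 mL/query at the thirsty end
# A long back-and-forth session can plausibly "drink" a liter.
```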
Speaker 1 (16:38):
Let's pause on that number for a second. Ten to fifty queries, the kind of thing you might do in a single session using ChatGPT, could drink half a liter of water. I'm pretty sure that me going back and forth about buying a car, I probably used about a liter, and that's using conservative estimates. Shaolei and his team were focusing on GPT-3, which was released
(17:01):
back in twenty twenty. Even five years later, OpenAI hasn't
released all the details researchers would need to give us
a clear picture of its environmental impact. Do the companies know how much water they're using?
Speaker 3 (17:13):
Of course I can't really speak on their behalf, but
I think they do. They could figure out the water
consumption easily, because they know their energy use, they know the water efficiency of the cooling system, they know where they build the data centers. So they have the information, but we're not seeing their own disclosure.
Speaker 1 (17:29):
By this point you might be picking up on a
recurring theme here. Putting a specific number on the impact
of AI is basically impossible, and it's not because the
math is too difficult.
Speaker 2 (17:41):
The thing is, the tech companies are also refusing to tell us exactly what's going on. So if you take Google's environmental report, it will show you the numbers are bad, because in twenty twenty three they showed that their carbon emissions were up like fifty percent compared to five years before, and they were pointing to AI as the main culprit. They were saying, okay, data center infrastructure is adding to
(18:02):
our carbon emissions, we're using more electricity. And at the same time, they just don't specify exactly what's going on with regard to AI. They say that making distinctions is not meaningful at all, even though, weirdly, Google was the company that just three years ago was in fact making this distinction. They were disclosing that ten to fifteen percent
(18:24):
of their total energy costs were related to artificial intelligence. Now they've stopped doing that. They don't want to tell us anymore.
Speaker 1 (18:30):
All of a sudden. It seems like something changed there. What do you think changed?
What do you think changed?
Speaker 2 (18:35):
The numbers got big, that's what's changed.
Speaker 1 (18:38):
Okay, not to spoil the end here, but it looks
like I'm not going to get a direct answer to
my question. But at least I have something of a ballpark,
even if it's a conservative one. And I also know
that we're using AI every day for everything. We might
not know the exact environmental impact of AI, but we
do know that it's increasing. So what do we do
(19:00):
about it? That's after the break. So in this episode,
we've been having some trouble figuring out the exact environmental
costs of AI. But this is a pretty common problem.
I mean, my friend Matthew Gault wrote up an article
(19:22):
at 404 Media explaining that the Government Accountability Office,
which is a nonpartisan group that answers to Congress, is
struggling with the exact same thing. They came up with
roughly the same numbers that we talked about earlier. They
put together a forty seven page report that acknowledges that
even after interviewing agency officials, researchers, experts, they're still left
(19:44):
with having to do estimates because, as they said, quote,
generative AI uses significant energy and water resources, but companies
are generally not reporting details of these uses. So even
the US government has no idea exactly how much carbon
we're pumping out or how much water we're pouring into
the sand. And this is an issue because when researchers
(20:09):
like Shaolei and Alex were first looking into AI's environmental impact, the biggest concern was training. That's the one-time process of feeding those massive data sets into the powerful machines. That's what was making headlines for energy use. But then came ChatGPT, and suddenly people weren't just training models,
(20:29):
they were using them all the time, and that shift
changed everything.
Speaker 2 (20:35):
As an end user, you can't even manage it properly, because the companies are not telling you. It's not like when you're interacting with ChatGPT that ChatGPT is gonna tell you, okay, be aware, now the carbon footprint of this conversation has already exceeded this amount. OpenAI knows this kind of stuff. They could tell you, but they won't. And then other people are left trying
(20:56):
to make some kind of estimate to figure out what might be going on. We also see that they are kind of downplaying the impact of what they're doing here. I mean, we see their environmental reports are disasters. The carbon emissions are shooting up, and the only thing they're saying is like, okay, don't worry about it, AI will solve this a couple of years from now.
Speaker 1 (21:14):
So the thing that's causing the problem is going to
solve the problem.
Speaker 2 (21:18):
Also, yeah, that's the excuse they're using. AI is going
to solve it. It's bad right now, but everything will
be better in a couple of years, trust us. But
it's one hundred percent wishful thinking. And to be honest, if you look at the whole history of technological developments, even if we do end up realizing a lot of efficiency gains with AI, which is definitely not a given,
(21:40):
it doesn't mean that our resource use in total is going to go down. This is the infamous Jevons paradox.
Speaker 1 (21:46):
Jevons paradox is a concept that comes up a lot in AI discussions recently. Basically, in the Industrial Revolution, coal-powered engines started to get more efficient, and some people assumed, okay, this is going to mean that now we're going to use less coal overall. But an economist named William Jevons said no, this is going to have the opposite effect. As coal
(22:07):
powered energy gets cheaper, demand will increase, and total consumption
of coal won't go down, it'll go up. He was right,
and that effect seems to keep repeating.
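(A toy illustration of Jevons paradox with entirely made-up numbers: per-unit efficiency doubles, usage triples in response, and total consumption still rises.)

```python
# Jevons paradox, toy version. All numbers are hypothetical.
wh_per_query, queries = 3.0, 1_000_000       # before the efficiency gain
wh_after, queries_after = wh_per_query / 2, queries * 3

print(wh_per_query * queries / 1e6)     # 3.0 MWh before
print(wh_after * queries_after / 1e6)   # 4.5 MWh after: up 50% anyway
```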
Speaker 2 (22:19):
Despite all the efficiency gains that we had, we're not saving on resources; we are using more resources.
Speaker 1 (22:26):
And essentially what you're saying here is, even if we are able to make AI more efficient, we're just going to
use it more, and so any efficiency gains are going
to be offset by the fact that we're just constantly
using this more and more and more.
Speaker 2 (22:39):
One thing that's extra annoying with AI is that there's
also this bigger is better dynamic going on, whereas if
you make the models bigger, you'll actually end up with
a better performing model, but it just means that your
efficiency gains are completely negated all the time.
Speaker 1 (22:53):
Every chat, every prompt, every AI-generated Ghibli image adds up. We just don't see that impact directly. So let's all just stop using AI, right? Well, that's probably not realistic at this point, and that's not necessarily what everyone's recommending.
Speaker 3 (23:10):
So I work on optimization, and I think this is
a problem. We can optimize it, we can make it better,
reduce the cost, and there are a lot of opportunities,
so we should definitely not panic. I hope the model developers can disclose that cost to the users, so they can figure out: should I use it now, or should I use it later?
Speaker 1 (23:27):
Let's say that I log in to ChatGPT and it says this query is going to use this much energy, this much carbon, and this much water. And if I
have that information up front, then I, the user, might
decide maybe I don't need to have it summarize the
entirety of the collective works of Shakespeare today.
Speaker 2 (23:47):
Yeah.
Speaker 3 (23:48):
Maybe. Or they could tell you if you do it later,
in one hour or in the evening, the cost will
be different. And you figure it out: do you want to do it now, or do it later?
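(A minimal sketch of the "now or later?" nudge Shaolei imagines. It assumes the provider exposes an hourly grid-carbon forecast; the forecast values and interface here are hypothetical placeholders, not a real API.)

```python
# Pick the cleanest hour to run a deferrable AI job, given a hypothetical
# forecast of grid carbon intensity (grams of CO2 per kWh) by hour of day.
FORECAST_G_CO2_PER_KWH = {9: 420, 12: 380, 15: 300, 21: 250}

def cleanest_hour(forecast: dict[int, int]) -> int:
    """Return the hour with the lowest forecasted carbon intensity."""
    return min(forecast, key=forecast.get)

print(f"Lowest-carbon hour: {cleanest_hour(FORECAST_G_CO2_PER_KWH)}:00")  # 21:00
```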
Speaker 1 (24:00):
What Shaolei is proposing here is that developers could build
in a system that would alert users that their query
is coming in at a high impact time of day,
and it could suggest that there might be a better
time to make that request when data centers have lower usage.
They can use optimization techniques to reduce energy consumption. This
concept isn't totally new. Google Flights shows carbon emission estimates
(24:23):
for flights and it will show you which option has
the least impact. So something like this for AI is
definitely possible, but I'm not totally convinced people would actually care.
The last time I booked a flight, I saw the
most carbon friendly option, but I didn't pick it because
it had a long layover. I didn't want to deal
with that. Putting the responsibility on users can sound good
(24:46):
in theory, but the flip side of that is it
can just be a way for companies to avoid doing
anything themselves. So should this responsibility really fall on us?
I mean, sure, you could decide to skip the chatbot
and take notes by hand, and that only really works
if you know what the trade off actually is, and
right now we don't, because the companies building these tools
(25:08):
aren't giving us the data that we would need to
make informed decisions in the first place. So maybe the responsibility should fall elsewhere, like policymakers. Shaolei is already thinking about what this could look like and how much of a difference it could make.
Speaker 3 (25:22):
We're informing the policymakers, so hopefully when they make decisions they could take into account this public health burden, water consumption, power strain on their infrastructures. These are the costs the local people will be paying for the companies. I think, especially for those big tech companies, they already have the systems
(25:42):
ready to do this type of optimization. They are doing it for carbon-aware computing. And we used Meta's locations as an example. If they factor the public health burden into their decision making, for example where they route their workload, they can reduce the public health cost by about twenty five percent, and reduce the energy bill by about two percent, and also cut
(26:05):
the carbon by about one point three percent.
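(To make the routing idea concrete, here's a minimal sketch of what health-aware geographic load balancing could look like. The regions, impact scores, and weights are hypothetical placeholders, not Meta's or anyone's real system.)

```python
# Route each request to the data center with the lowest weighted impact.
# All region names and per-query impact scores are made up for illustration.
DATA_CENTERS = {
    "region-a": {"health": 1.8, "carbon_g": 4.2},
    "region-b": {"health": 0.6, "carbon_g": 5.0},
    "region-c": {"health": 1.1, "carbon_g": 3.1},
}

def route(w_health: float = 0.5, w_carbon: float = 0.5) -> str:
    """Pick the region minimizing a weighted health + carbon score."""
    def score(name: str) -> float:
        dc = DATA_CENTERS[name]
        return w_health * dc["health"] + w_carbon * dc["carbon_g"]
    return min(DATA_CENTERS, key=score)

print(route())          # 'region-c' balances both impacts
print(route(1.0, 0.0))  # 'region-b' minimizes public-health cost alone
```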
Speaker 1 (26:09):
So just by being more intentional about where they route digital traffic,
a company like Meta could reduce detrimental impacts on public
health and they'd be saving some cash at the same time.
This is called geographic load balancing, and for the user
it's totally seamless. You log in, your feed loads, you don't notice anything. But behind the scenes, your request is
(26:31):
going somewhere where it's cleaner, cheaper, and less harmful to process.
Even beyond where companies route traffic, they can also consider
where they build the data centers from a public health perspective.
Speaker 3 (26:43):
When they build data centers in the future, they can take into account these factors, because the decisions that we make today will be impacting the public health, the
water consumption, the power infrastructure for many years to come.
Speaker 1 (26:55):
Shaolei is thinking about the future, and research on future optimization is a big deal, because the AI boom is
already here. Big tech companies are projected to spend three
hundred and twenty billion dollars on AI technology and data
centers this year, which is nearly one hundred billion more
than last year. So where we put these data centers
(27:16):
and where we route the traffic really matters.
Speaker 3 (27:19):
Something that I was not expecting is how widespread this is, because I was thinking, if I live, let's say, five miles away from a data center or power plant, I wouldn't be affected. That was wrong. These air pollutants are what the EPA defines as cross-state air pollutants. They do travel hundreds of miles along with the wind. We're going to
(27:40):
have a significant impact on the public health just by strategically placing the data centers.
Speaker 1 (27:46):
What that really highlights is something that we don't think
about with tech infrastructure. It doesn't just impact the people
who live next door. When air pollution travels hundreds of miles,
it turns these data centers into regional issues, not just
local ones. I'll give you an example right here. As
we're working on this episode, I saw this article in
Politico and I just want to read you the first
(28:07):
sentence, quote, Elon Musk's artificial intelligence company is belching smog-forming pollution into an area of South Memphis that already leads the state in emergency department visits for asthma, end quote. That's probably enough to give you the idea. But just to explain more: xAI, which is the company behind Grok,
(28:27):
the AI chatbot that you use on Twitter, set up shop in Memphis with enough methane gas turbines to power two hundred and eighty thousand homes. The company didn't get the required air pollution permits. They're running without the emission controls that federal law usually requires, and in under a year of operation, xAI is now one of
(28:48):
the largest emitters of smog-producing nitrogen oxides in the entire county. And this facility is located near predominantly Black neighborhoods that are already dealing with high levels of industrial pollution. These inequalities already existed, and tech development is not making it better, it's making it worse. It is often like this.
(29:12):
There are absolutely people who are feeling the impacts of
this right now, and there's people who will feel it
in the future. Maybe somebody will write an article about them,
maybe not. So I was hoping that I could use this podcast to solve all my personal problems, but apparently we're oh for one here, because when I started working on
(29:32):
this episode, I was thinking that this section right here, the outro, is where I'd say, wow, now I know exactly what impact my use of AI is having on the planet. But I don't. And that's pretty annoying. Because, and I guess this is as close to an answer as we're going to get, it's not really about how often I personally decide to use ChatGPT or Gemini
(29:56):
or Claude or whatever. It's about what happens when companies build systems that are this powerful but also this resource-hungry, and they refuse to tell us what it really costs. And I think we deserve to know, not just so that we can make individual choices about how often to use ChatGPT or Gemini or whatever, but so that
(30:16):
we can hold the right people accountable. Because if AI is really going to change the future like they say it will, we should know how much that future costs.
Thank you so much for listening to Kill Switch. If
(30:37):
you got any ideas or thoughts about the show, you could hit us at killswitch at kaleidoscope dot NYC, or you could hit me at dexdigi, that's d-e-x-d-i-g-i, on Instagram or on Bluesky if that's more your thing. And if you liked this episode, if you're on Apple Podcasts or Spotify, take your phone out your pocket and leave us a review.
(30:59):
It really helps people find the show, and in turn, that helps us keep doing our thing. Kill Switch is hosted by me, Dexter Thomas. It's produced by Sen Ozaki, Darl Luk Potts, and Kate Osborne. Our theme song is by me and Kyle Murdoch, and Kyle also mixed the show. From Kaleidoscope, our executive producers are Oz Woloshyn, Mangesh Hattikudur,
(31:20):
and Kate Osborne. From iHeart, our executive producers are Katrina Norvell and Nikki Ettore. See you on the next one.