
March 4, 2026 · 30 mins

kill switch went live on March 4th to cover one of the newsier topics of the week: the fight between the US government and Anthropic, the company behind Claude. Dexter talked to WIRED senior writer Will Knight to discuss what exactly is happening, why, and the stakes for all of us.

Subscribe to our YouTube channel to catch our next live episode: https://www.youtube.com/@killswitch_pod

Got something you’re curious about? Hit us up at killswitch@kaleidoscope.nyc, or @killswitchpod, or @dexdigi on IG or Bluesky.


See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:09):
Yo, what's going on? It's Dexter. So technology does not slow down for anybody. And sometimes I wish that I could just talk to y'all live about what's happening right now. But then we were talking about it, and we realized, wait a second, we can talk to you live. So we're gonna try something new here. Every once in a while, we're gonna talk about something that's happening right now and let people jump into the audience to ask questions, and

(00:31):
we'll bring a guest along. And so what you're about to listen to is our first episode, which we live streamed on Wednesday, March fourth. So if you want to catch the next one as we're recording it, and maybe even ask some questions, you can subscribe to our YouTube page, and you can catch the links to that in the description. So everyone, we're doing something different here.

(00:57):
We're doing the first live episode of Kill Switch. We're gonna probably try to do these maybe once a month, just to give people the opportunity to actually jump into the chat and talk about stuff that is happening right now. So thanks for bearing with us while we experiment a little bit. But today, what I'd like to talk about

(01:17):
is what is happening with Anthropic, what's going on with OpenAI, and what's going on with the US government. To talk about it today, our guest is Will Knight, who's a senior writer at WIRED. He's been writing about AI for a while, and he's got a weekly AI newsletter called AI Lab. Will, welcome to Kill Switch.

Speaker 2 (01:35):
Thank you for having me. I'm honored to be on the first experimental live stream.

Speaker 1 (01:40):
Yeah, hey, look, we'll see. Everything's an experiment at this point, for better or for worse. Sure. So I want to start this off with a timeline, okay? On the twenty-seventh, Donald Trump gets on Truth Social and says that the company behind Claude is a radical left woke company. Okay, yeah, out of nowhere. People are like, what? Why is he saying this? Pete Hegseth

(02:02):
gets on Twitter and says that Anthropic, the company that makes Claude, is a supply chain risk, which, most people have never even thought about what that means. The next day, OpenAI, which makes ChatGPT, announces that they're working with the Department of War, and then Donald Trump announces that we're bombing Iran. As I say

(02:23):
that, that sounds like a completely unhinged sequence of events.
Please tell me where I'm wrong, because I must be
missing something.

Speaker 2 (02:32):
I mean, I think there's a lot that's right in there, and a lot that's unhinged. But to sort of set the scene a little bit, you know, this goes back. The use of Claude in classified military systems has its origins last year. Last summer, Anthropic was the first to get involved in

(02:53):
working with the military. You know, they've pursued a very business-focused approach, and I think that sort of led them to focus on the DoD, which is like one of the biggest businesses in the US. So they ended up supplying Claude, their model, a version of it which runs on the Pentagon's own systems, for military use on classified systems. In many ways, this is totally remarkable, because

(03:16):
just a few years ago, everybody in the AI industry was saying, one thing we don't want to do is work with the military, and people were staging, you know, walkouts over this, right? So it shows you how the tech world has really pivoted on that. What happened more recently, it seems, and you know, this is somewhat disputed, but there have been reports that when President Maduro

(03:38):
was captured, Claude was used, and somebody at Anthropic may have raised questions about that. This seems like it snowballed. And there were two carve-outs in the original contract signed last year. One was against mass surveillance of US citizens, and the other was that the model shouldn't be used to build fully autonomous weapons. And

(04:01):
the DoD, or DoW, said that they wanted to change the contract, essentially, which is an unusual thing to do, but they wanted to allow all lawful use. And to some degree, this dispute seems a little bit like each side talking past the other. And I think that the DoW, the Department of War, became very concerned about a tech

(04:24):
company saying how it could and couldn't use the technology. And to be fair, that is quite an unusual idea, right? It's like somebody selling a missile or a jet fighter to the Pentagon and then later on saying, we don't want you to use it in this way or that way. But then again, AI is a really different technology. It's very new, it's very experimental, and I believe Anthropic, especially among AI companies, is very focused on safety, and

(04:46):
they have concerns just about how it could end up
being used.

Speaker 1 (04:50):
Yeah, well, so let's jump into this. So Anthropic, you know, this is a statement that is put on anthropic.com, so this is their own, you know, PR statement here, and the title is Statement from Dario Amodei on Our Discussions with the Department of War. And it starts out, the first line: I believe deeply in the existential importance of using AI to defend the United States and

(05:12):
other democracies and to defeat our autocratic adversaries. So already, out of the gate, that's kind of an interesting thing for, again, a company in Silicon Valley to be saying, even just compared to the past couple of years, really. But it seems like there are two big points of contention here. He says, using these systems for mass domestic

(05:34):
surveillance is incompatible with democratic values. So spying on other people, we're okay with that. Spying on people in the US, we're not okay with that, right? And then the next thing is, quote, frontier AI systems are simply not reliable enough to power fully autonomous weapons. Both of those sound totally okay and reasonable. Not only do they sound okay

(05:59):
and reasonable, but I mean, look, I've spent a whole lot of time around people on the pretty far right, you know, for work, right? And they will tell you, I don't want no government spying on me, and I don't trust these computers. Right? Where's the beef? What's the problem here?

Speaker 2 (06:18):
That's a great question. You know, I've spent a lot of time talking to people in the Pentagon and military defense companies, and nobody believes that those frontier models are ready, or even remotely near ready, to be used in autonomous weapons. And consistently the government has said, we have no desire to do either of those things. I think what

(06:39):
it really came down to was that they didn't just, you know, take a knee and agree to change the terms of their contract. And as you say, it culminated in Trump and Hegseth and others putting out these very, you know, as often happens, sort of classifying them all of a sudden as the

(06:59):
radical left, which is pretty incredible for a company supplying technology that's running on classified US systems.

Speaker 1 (07:06):
Right. So this is the post that Donald Trump made on his social media network, Truth Social, and I'm reading from the top, and I'm reading verbatim: The United States of America will never allow a radical left woke company to dictate how a great military fights and wins wars. That decision belongs to your commander in chief and the

(07:28):
tremendous leaders I appoint to run our military. And then he finally gets to what he's talking about. He says, the left wing nut jobs Anthropic have made a disastrous mistake trying to strong-arm the Department of War and force them to obey their terms of service instead of our constitution.

Speaker 3 (07:44):
Yeah.

Speaker 2 (07:45):
I mean, a lot of that doesn't really even make sense. I think Anthropic is not a radical left organization, by any stretch. It's since come out that the DoD was negotiating with them, that they were trying to get this wording kind of massaged into a format that everybody was happy with, and it doesn't seem like that would have been impossible. It feels like

(08:07):
some people within the administration lost patience. And I think a lot of it came down to just power, and the Department of Defense and the leadership not being able to handle being denied that by a tech company that they could very easily see as, you know, part of this sort of coastal elite, and, in the case of Anthropic, woke and liberal. I mean, it's

(08:30):
another reflection of how AI is also sort of a cultural lightning rod, right? Suddenly it becomes this kind of left versus right thing. So, I mean, you know, that post kind of does, unfortunately, sum up the entire situation, which is slightly bewildering when you know that now they've taken the action to try

(08:52):
to classify, as you said, Anthropic as a supply chain risk, which is something that's only ever been applied to foreign nations. And we're doing this to one of the foremost AI companies in the United States, one of what are supposedly, you know, our national champions. It doesn't sound to me like they're really promoting American technology. If

(09:13):
you're too keen to do that, you know, there's a real danger you're going to harm this nascent, very important American company, and also maybe dissuade other companies from working with the government or the Pentagon if they feel that that's going to happen.

Speaker 1 (09:27):
Right, yeah. So the supply chain risk thing, again, that's the language that Pete Hegseth used. What does it mean to call Anthropic a supply chain risk? What does that even mean?

Speaker 2 (09:42):
I mean, you know, well, the first thing to say is that this supply chain risk designation is incredibly unusual, extraordinary, and just the kind of left-field thinking that we've become accustomed to with this administration, because it's usually applied to, you know, a company that you want to remove from the US

(10:02):
supply chain because you're worried that their software or their hardware might contain backdoors, might be a threat to national security because it could be used by an adversary. So this is the very sort of all-or-nothing, you either agree with everything we say or you are the enemy. I mean, that's the kind of tenor, which sounds familiar to me generally. The most extreme version of that designation,

(10:26):
and we haven't seen the full details of it because they haven't produced it yet, would mean that any company that's working with the DoD couldn't use your technology, which is vast. I mean, Amazon, which has invested billions in Anthropic, has massive contracts with the DoD, so it's an enormous thing to designate them. It also doesn't really make sense, because, you know, they were also saying, we may invoke the Defense Production Act, which has been

(10:48):
used in wartime, and actually also during the Cold War, to force companies to prioritize production and supply because what they were making was so important. So, wait, I think this may be a legal problem that the DoD has, and I think, you know, Anthropic has given a signal

(11:09):
that it's going to take some legal action. So it's difficult to argue, on one hand, that it's so vital we may invoke the Defense Production Act, but then we may also designate you a supply chain risk.

Speaker 1 (11:20):
Yeah, like you're too dangerous for us to use, but
you're so useful that we want to use your product.

Speaker 2 (11:28):
But we want to force you to supply it.

Speaker 1 (11:32):
Yeah, like nobody can have it, but we want it. Okay,
what are we doing here? So basically, the next day, OpenAI signs something with the government. How does that fit into the
open Ai signs something with the government. How does that

(11:54):
sequence of events? Like, what kind of sense are you
able to make of that?

Speaker 2 (11:58):
Yeah, it is very interesting that it's Anthropic's chief rival. And a quick reminder that Anthropic was founded by people who left OpenAI because they said that company was not developing AI with enough focus on safety, and they're really, really bitter rivals. And as this was going on, Sam Altman was

(12:18):
working through an alternative deal. The deal that they signed seems extremely similar to the one that Anthropic were trying to hash out. It has similar carve-outs; well, it doesn't actually carve them out, but it specifies those things, that mass surveillance is unlawful and that nobody wants to build autonomous weapons.

Speaker 1 (12:37):
So are the details, or the terms, of the contract that OpenAI signed with the Department of War materially different from what Anthropic did? Because this is from CNBC. Altman said, and I want to read this here, quote: I believe that we will hopefully have the best models,

(12:57):
and that will encourage the government to be willing to work with us, even if our safety stack annoys them. But there will be at least one other actor, which I assume will be xAI, which effectively will say, we'll do whatever you want. So the vibe here seems to be like, okay, well, listen, at least we're the adults in the room. They're gonna let Grok do whatever Grok wants

(13:19):
to do, whatever Trump wants to do. They're just gonna let it do it, drop bombs wherever. At least we're going to be the adults in the room. But is OpenAI doing something fundamentally different? Did they cede more ground than Anthropic did, or is it the same thing?

Speaker 2 (13:35):
You know, yeah, there's a lot going on there. I think that they did cede slightly more ground, in that they basically agreed to a contract that specified all lawful use. They changed the contract to say, oh, but, you know, to be clear, here's what is lawful and what the DoD doesn't want to do. But I think there isn't a huge amount of difference. I mean, I think

(13:56):
what you're seeing with that internal statement was really, you know, Sam, OpenAI, willing to take a bit of flak, because it's just incredibly important to get a win over Anthropic, to be seen to be very on board with the US government, because you want more and more involvement, more and more contracts. And since then, you know,

(14:17):
OpenAI has gotten quite a lot of blowback.

Speaker 1 (14:20):
Yeah, well, let's talk about the blowback here. I mean, this is on TechCrunch: US uninstalls of ChatGPT's mobile app went up almost three hundred percent. So people were uninstalling ChatGPT and started installing Claude, Anthropic's Claude.

(14:40):
And so, and I really hesitate to call this a game, but there's a really weird way in which there's this game being played over who gets to be the contractor for the Department of War, but then also they're still fighting for consumers here.

Speaker 2 (15:00):
Definitely. And it is interesting, the sort of perception of who's the good guy. You had, you know, Katy Perry subscribing to Claude Pro and posting about it, this sort of interesting cultural moment. And I think the thing that struck me was, like, do people really know how Claude is being used by the military? Do they know what Anthropic was

(15:23):
willing to do? I think it's gotten a little oversimplified, to my mind, and I think one of the risks there is that we ought to be, rather than necessarily uninstalling something or posting about who you think is best, really pushing for more of an explanation about how the technology is being used and developed and tested. Like, what does it mean that it's used on classified systems? There

(15:45):
should be more accountability there. And the fact of the matter is, there is nothing legally prohibiting the development of autonomous weapons, and there isn't really a legal framework around that sort of thing. If, you know, civilians were affected by fully autonomous weapons, you suddenly have a situation where nobody is legally accountable. And so these are things that really

(16:06):
need to be figured out. And that doesn't mean a fully autonomous weapon; it could be that you automate part of the so-called kill chain, and so decisions get kind of automated away. I think it is really important to have more focus on this and to have more discussion of it. And you know, one of the reasons, honestly, as someone who's looked at the DoD and the work

(16:27):
going on around autonomy, is that unfortunately it is the case that we're just going to have more and more autonomy, and most analysts expect that there are going to be swarms of drones attacking systems, and the truth is you just won't always be able to have humans in the loop. So this is going to become a really big issue. And it's not necessarily about Anthropic's models in those situations, but the question of who's accountable, how do we know
but the question of who's accountable, how do we know

(16:50):
that the systems are reliable? That is going to become
more and more important. So I think it's good that
the situation is getting some attention. I do think that
the positive thing is that it is important for people
who understand this technology really really well. Do you maybe
have a role in at least influencing how it's used.
And that doesn't mean, you know, like you get to
say yes or no to a mission. It just means

(17:10):
perhaps that you, say, have some input on how you ought to try and deploy it within the Pentagon, I think.

Speaker 1 (17:17):
I mean, how does that functionally work, though? Maybe that's the question here. That is the question. Yeah, can you have a contract with the government and tell them, no, you can't do this? And I'm asking that partially because certainly there is a lot of talk of, well, wait a second, the Iran strike. Was Claude used there?

Speaker 2 (17:38):
Yeah. To answer the first question, I think that is a really good question. How do companies, how does the public, understand how things are developed, what the rules are around them? And there are specified rules around autonomous weapons, around surveillance, but this is also a technology that's different, that can misbehave in surprising ways and can be kind of

(17:58):
vulnerable in surprising ways. So I mean that it needs a bit more scrutiny. The Washington Post has reported, and I think it seems very likely true, that Claude was absolutely used in the Iran operation as part of this system called Maven, which was built by Palantir, and which includes a bunch of things, not just a language model. It can, you know, analyze maps and images and

(18:21):
do mission planning and so on. But yeah, Claude was used as part of that. And so now, one of the things that was kind of crazy with Trump's initial statement was that he said, I'm ordering everybody to stop working with Anthropic because they're all these terrible people, but there'll be a six-month period where we carry on using it. Right? It can't be that bad if we're about

(18:42):
to use them in this you know, incredible operation in
the Middle East.

Speaker 1 (18:47):
So yeah, I mean, we're laughing. It's terrible. I mean, it's ridiculous and also terrible. I suppose it can be two things at the same time. And because we're doing this live, I wanted to take a few questions. And one of the questions I got actually came earlier. I posted on Instagram, does anybody have any questions? And somebody posted,

(19:09):
and I think they were kind of joking, but they said, hey, can we just make all the countries' AIs fight against each other? Can we just have them fight instead of the real war? But it's kind of funny that they say that, because, I don't know if you've seen this article in New Scientist, but basically, different AIs,

(19:30):
and the headline here kind of says it all: AIs can't stop recommending nuclear strikes in wargame simulations. And the subhead: leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated wargames in ninety-five percent of cases.

Speaker 2 (19:48):
Right, yeah. So I read that paper. It's important to know that this person was just trying to understand what the behavior of the models would be. They're not advocating doing that, and I don't think anybody is. Actually, I think this is one of the, like, not letting AI have a role in nuclear weapons decision-making is one thing that the US and China have actually

(20:09):
agreed on, which is extremely unusual these days.

Speaker 1 (20:11):
Right.

Speaker 2 (20:11):
But it's unsurprising to me that if you have a model trained on the entire Internet, it's going to tend toward escalation and not be very, very expert in how to avoid it. You know, it's not going to be someone who spent their life worrying about that sort of thing. And on the question of whether AIs could fight each other,
And on the question of whether ais could fight each other,

(20:33):
the truth is, like, you are already seeing, in certain conflicts, you know, autonomous systems, semi-autonomous systems, or remotely operated ones, fighting other semi-autonomous systems.

Speaker 1 (20:44):
Right.

Speaker 2 (20:44):
And some people may argue that that would remove people from harm's way. That is not the reality of a conflict; there are going to be plenty of people in harm's way. And the question is, you know, how do you do that in ways that are reliable and don't make mistakes? There has been talk in recent years of these sorts of things like existential risks, but long before that,

(21:04):
there are lots of systemic dangers of using this technology in any critical system. So, you know, just as you have hallucinations if you're using a chatbot, or just as it gets simple reasoning wrong in a very convincing way, that is what we should be worried about, I think, in how this and other language models are used by the military, by the Pentagon.

Speaker 1 (21:25):
You know what.

Speaker 2 (21:25):
I once talked to a former naval officer in the US Navy whose job it was to, you know, be in command of these nuclear submarines, and he said one of the first things he did when he got the job was get permission to go and talk to his counterpart in China, go meet him and spend time there. The reason being, if something went wrong, if one

(21:45):
of the systems went wrong or a person made a mistake, you know, or say there's a collision between two vessels, and you've got a risk of escalation, he wants to be able to pick up the phone and talk to someone he knows. He doesn't want that to be the first time he's spoken to them. And so that sort of shows the importance of the human element. And that's not something, you know, we talk about

(22:07):
AGI and intelligence being solved, but that is a very human thing that no language model really has. They don't have that kind of human intelligence, right?

Speaker 1 (22:17):
Yeah, ChatGPT can't take Grok out for coffee.

Speaker 2 (22:20):
Yeah, no, well, not yet. I mean, that's a frightening thought, it is. I don't think anybody would want to take Grok out.

Speaker 3 (22:26):
Yeah.

Speaker 1 (22:39):
So there's a question also here in the chat that I want to get to. So b masters is asking, saying they read some sources that say that Anthropic was actually willing to give in to the Department of War's demands, but the communication broke down, and the CEO just doubled down on the ethical argument, just as a PR move. I mean, in your reporting, have you seen anything that

(23:01):
would substantiate that?

Speaker 2 (23:02):
I couldn't tell you definitively if they were about to cave to the DoD's terms. I have heard that they were very close to reaching an agreement, which sounded to me like a compromise between the two, and it wasn't very far off. I think it's gonna be too easy to get caught up in the drama of these companies and miss the

(23:24):
bigger picture, which is, well, hang on, maybe we should have more of a discussion about how those things are used, what the benefits are. It's important to say that I don't think you should not use AI in defense. It's all about how reliably you can do it. What are the limits? What accountability is there? How do citizens sort of know how that's being used in their name?

Speaker 3 (23:44):
Right?

Speaker 2 (23:44):
And, you know, Trump wrote in that message that he decides how wars are fought, and in some ways, right, he decides when we invade Iran, but he doesn't decide the terms of war, like, you know, what is a war crime. That is decided by society at large. And that's the sort of bigger thing, that we need to be asking these questions when it comes to the

(24:05):
new capabilities that AI is going to give governments, which is, they will be able to do new kinds of mass surveillance you could never, you know, never imagine before. Right? You'll be able to find patterns you couldn't think of, or you'll be able to sort of automate that much more. So I think, yeah, that's the bigger thing for me, and I think we should remember that, you know, real life, real human beings are at stake here,

(24:27):
and that war is hell, and it is something you want to try and avoid.

Speaker 1 (24:31):
Right, yeah. You were talking about the bigger picture, and you also mentioned Palantir, which, as it happens, a little bit ago I talked to your colleague Makena Kelly about Palantir, and we actually have an episode about that dropping next week. But this kind of seems to be part of a larger trend, like you said,

(24:52):
big tech companies who are very interested in working with the government, but specifically working with the government in military capacities, which feels new, and in some ways it's not, of course. Some of the origins of things like the Internet, you know, do come from the military. But in terms of the way that we think about

(25:15):
the culture of Silicon Valley, I think this turn has been a little bit unsettling but also surprising for a lot of people. But it does seem to be a trend right now. Where do you see this going, say, even over the next year or so?

Speaker 2 (25:31):
I mean, that's a great question. That is a really interesting question. It's hard to imagine AI companies becoming disentangled, because AI is so strategically important, and the incentive for working with the government is enormous. Right? The amount of data, the amount of compute, the amount of money that the government has to throw at those companies is

(25:51):
so huge that it would be surprising to me if everybody said, our conscience is not clear, so we're going to rein that in. So if you imagine that we're going to have more and more tension, more and more conflict, there will be pressure, too, for everybody to use that AI in new ways that might be concerning,

(26:12):
besides the use in military situations, which I think we should try and have transparency on. I think it is really telling that Anthropic had this concern about mass surveillance, because I think language models are preternaturally brilliant at parsing large amounts of text and audio and making sense

(26:35):
of it, finding patterns and making predictions, like we haven't had anything like that before, and that's exactly what you might want to do in surveillance. So the temptation is going to be huge. And yeah, I think it will be very interesting to see if this turns into more of a pushback, if more people get involved. One thing is that tech workers, I

(26:57):
mean, especially AI engineers, right, have an incredible amount of clout these days, because they're so valuable. And I think this is, you know, one reason why Sam Altman and others are going to extraordinary lengths to try and explain, and to appease their staff, and maybe even change tack because of that.

Speaker 1 (27:15):
Yeah, well, there were a bunch of workers, I mean, hundreds of workers from OpenAI and Google, who signed an open letter basically supporting Anthropic. So there is something of a little bit of pushback, if I can call it that, from inside these companies, and even, you know, at the consumer angle, people uninstalling ChatGPT

(27:37):
and installing Claude. The question, though, becomes, are both of those things enough? Is the pushback from inside the company enough? Is a bunch of people deciding that, hey, I'm gonna uninstall OpenAI and use the woke Claude, which I think we've established is maybe not as woke as some people might have given it credit for, what does that do? Does any of that

(28:00):
move the needle?

Speaker 2 (28:02):
I don't think it does at this stage. I think the question is, does it go from hundreds of workers to thousands, and does the pushback from the public, which already seems to be fading somewhat, turn into something, maybe with further incidents, that has a meaningful impact on the bottom line, you know. And it is sort of hard these days, right, for people. It's harder for people

(28:24):
to push back and draw lines when it comes to the government. It seems like everybody's a bit more curtailed and scared, right? But at some point, you know, as we've seen with some of the protests, there is a sort of breaking point, and I think there needs to be a bigger pushback. But I also think we probably should have more nuance around what that

(28:45):
pushback really is, you know, beyond just saying Anthropic is better. Maybe everybody should get together, all these companies should be saying, hey, we're not totally sure this stuff is ready even for half the use cases, whatever it might be, and then have the government explain to people how it's being used. It's one of the things that Americans can do, you know,

(29:07):
is push political leaders, in theory, to do that. So that would be my hope.

Speaker 1 (29:12):
Well, thank you so much for coming through and talking about this. It's heavy stuff, but we got to talk about it. Thank you, for real.

Speaker 2 (29:19):
You're very welcome, yeah. Thanks for having me.

Speaker 1 (29:24):
Thank you so much for listening to another episode of Kill Switch. You can email us if you want to talk at killswitch@kaleidoscope.nyc, or find us on Instagram at @killswitchpod, and if you like what you're hearing, maybe leave us a review. It helps other people find the show, which helps us keep doing our thing. And as I said at the beginning of the show, Kill Switch is on YouTube, so if you

(29:45):
want to catch the next one live, the link for that and everything else is in the show notes. Kill Switch is hosted by me, Dexter Thomas. It's produced by Sena Ozaki, Darluck Potts, and Julia Nutter. Our theme song is by me and Kyle Murdoch. From Kaleidoscope, our executive producers are Oz Woloshyn, Mangesh Hattikudur, and Kate Osborne;

(30:06):
from iHeart, our executive producers are Katrina Norvell and Nikki Ecord. Catch you on the

Speaker 3 (30:11):
next one. Goodbye.
