
October 10, 2025 29 mins

In this episode, Ed Zitron is joined by writer Brian Merchant to talk about US AI regulation, and how governments never seem to make the effort to understand technology.

https://www.bloodinthemachine.com/
https://www.bloodinthemachine.com/p/were-about-to-find-out-if-silicon

Get $10 off a year’s subscription to my premium newsletter: https://edzitronswheresyouredatghostio.outpost.pub/public/promo-subscription/w08jbm4jwg

YOU CAN NOW BUY BETTER OFFLINE MERCH! Go to https://cottonbureau.com/people/better-offline and use code FREE99 for free shipping on orders of $99 or more.

---

LINKS: https://www.tinyurl.com/betterofflinelinks

Newsletter: https://www.wheresyoured.at/

Reddit: https://www.reddit.com/r/BetterOffline/ 

Discord: chat.wheresyoured.at

Ed's Socials:

https://twitter.com/edzitron

https://www.instagram.com/edzitron

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.

Speaker 2 (00:05):
Welcome to the monologue this week, except it's a duologue. There's two of us. Better Offline. I'm your host, Ed Zitron, and today I'm joined by Brian Merchant, the author and writer of the book and newsletter Blood in the Machine.

Speaker 1 (00:28):
Brian, thank you so much for joining me. Ah, thanks
for having me. Good to be back.

Speaker 2 (00:32):
So today we're talking about AI regulation, the current state of, well, California's regulatory moves, and indeed the very confusing situation.

Speaker 1 (00:41):
And I think we can.

Speaker 2 (00:42):
Start with this, actually this AI safety bill that was
vetoed and then just got signed.

Speaker 1 (00:46):
What happened there?

Speaker 3 (00:48):
Yeah, I mean, the long story short is that, you know, the people who are actually worried about catastrophic risk, right, like the AI doomer crowd, the people who think that, like, this is the number one thing to worry about with AI, uh, they all got together last year and wrote this bill. It was SB 1047, and

(01:14):
it was, you know, again, assuming that you are worried about that kind of thing, like, this was a legitimate stab at trying to keep the companies honest.

Speaker 1 (01:23):
It did things like mandate,

Speaker 3 (01:26):
like, third-party audits of the systems to make sure that they weren't, you know, too risky, or that they weren't, you know, including information about biological weapons or whatever, and teaching people how to... But the AI companies hated it, because it mandated some sharing

(01:47):
of information, some transparency. It mandated... No, no, no, no, not even a little bit, not even this. Even, like, the safe AI companies, like Anthropic, yeah, yeah, even those guys, they were against it.

Speaker 1 (02:03):
So everybody was against it.

Speaker 3 (02:04):
It passed because Scott Wiener, the sponsor, is a fairly prominent and powerful legislator. But even though it passed both houses and was sitting on Newsom's desk, the industry, Silicon Valley, got Newsom to veto it. So Newsom vetoed it, and then he said, let's try it again next year.

(02:25):
So Scott Wiener got together with a bunch of, you know, AI ethics people and some...

Speaker 1 (02:33):
Yeah.

Speaker 3 (02:33):
And then so this year he comes back with an even more watered-down version of even that. Again, the aim is to prevent catastrophic risk, and, yes, he just signed this bill. So, as we're recording this, you know, he signed it.

Speaker 1 (02:52):
A few days ago.

Speaker 3 (02:53):
Uh, it's on the books, and people are, like, celebrating this thing for some reason.

Speaker 1 (03:01):
And I'm like, I'm really scratching my head. Why are you not sold on it? Okay, so what's wrong with it? Okay, here's what the bill actually does.

Speaker 3 (03:08):
It does four things, basically, instead of, you know, actually mandating actual transparency and sharing data with auditors, things that might actually, you know, maybe be considered a prescription if you're worried about things like catastrophic risk. One, the new bill forces the tech companies to

(03:29):
publish safety protocols on a website.

Speaker 1 (03:33):
Say yeah, yeah.

Speaker 3 (03:36):
Yes, it says, this is what we're doing to ensure safety, and our security practices. Right, this is what we're going to do to keep you safe from catastrophic AI.

Speaker 1 (03:50):
Yeah, you know, check out our policy.

Speaker 2 (03:53):
No, no, no. And then you need a page.

Speaker 3 (03:57):
You need a page that people can see. No enforcement that'll stop it. And then, if something does go wrong and the companies have done something unsafe with their LLMs, they need to report it to the state. So it's an honor system, so they have

Speaker 2 (04:16):
to tell. And what happens if they don't report it? Unclear?

Speaker 1 (04:21):
And say right stuff unclear?

Speaker 3 (04:25):
Uh, so the one thing that you could argue, or two things, again, like, in theory, these are fine things that it says. The other... well, those first two things, they're limp. I think they're a complete joke. But parts three and four are whistleblower protections, which you're

(04:47)
already supposed to have. Yeah, right, good to formalize, but yeah, yeah, it's just kind of like underlining it with a Sharpie, saying, okay, people should be able to blow the whistle if something catastrophic is happening. I have very little faith this will meaningfully change anything at all.

Speaker 2 (05:07):
I guess there are no, like, protections actually in that. Like, is there anything that specifies?

Speaker 3 (05:12):
There's some language that does raise the threshold, again, theoretically. But, like, you just have to think about how whistleblowers are already treated and how difficult it is for them to come forward, with all the NDAs and just sort of the norms in the industry. So, like, theoretically, you know. But again, it's supposed to be for workers

(05:32)
who are concerned about catastrophic risk, not about, you know, if they see a company doing fraud or whatever.

Speaker 1 (05:40):
This is for a catastrophic risk.

Speaker 3 (05:42):
So really no real protections, no teeth. Really, no teeth. I was reading this thing and I'm not seeing really any enforcement mechanisms, other than, you know, these companies get a slap on the wrist, and maybe it's a little bit slappier if it comes...

Speaker 2 (05:59):
But doesn't sound like there are actual remedies or anything
like that.

Speaker 3 (06:02):
Not really, no. There's, finally, the fourth thing that it does. Again, in spirit, this is okay. Like, it's a good idea. It's to start a public consortium called CalCompute, like a public alternative for AI for researchers. It says, like, let's... but it's just a committee. It's a committee, yeah, bureaucracy.

(06:28):
It forms a consortium to discuss the initial.

Speaker 2 (06:31):
get-together that will discuss what might happen in the future. And I'll tell

Speaker 1 (06:36):
You brilliant once once again.

Speaker 3 (06:37):
So my read on this... and I've seen a lot of statements coming out of, you know, otherwise decent sort of orgs, saying, like, this is a step for, you know, AI safety, and I know it's not.

Speaker 1 (06:51):
It's not really.

Speaker 2 (06:52):
It doesn't even seem like it addresses AI. It doesn't seem like it's an attempt to mitigate any harms.

Speaker 3 (07:00):
I mean, it is an attempt to you could I
think that.

Speaker 2 (07:04):
Generally. Which harms? Yeah, the most, I'm gonna guess. So, in looking at AI regulation, you have probably not seen anything that addresses the environmental damage, anything about the stealing.

Speaker 1 (07:18):
Yeah, workplace surveillance, things like this.

Speaker 3 (07:21):
So my read on this, right, like, so I think, again, I think Scott Wiener and the sort of group that are, like, the AI safety groups, like, they believe in this stuff. They really lobbied for this, like, they think that it is important to have something better than nothing. But the effect of this is largely just that now Gavin Newsom gets to say, like,

(07:45)
I signed an AI bill, like, I'm doing my due diligence. And meanwhile, there are a bunch of other bills that we're going to talk about that would actually, you know, do something, that would actually sort of meaningfully rein in the AI companies at least a little bit, or at least point a way towards doing this.

Speaker 1 (08:01):
This is just again.

Speaker 3 (08:05):
Putting your safety policies on a website, like an honor
system to alert the state if you've done something wrong,
kind of just saying okay.

Speaker 2 (08:12):
An honor system for the most well-funded companies, again.

Speaker 3 (08:15):
And here's the number one way that you can tell
that this is entirely bullshit, and that is that Anthropic
is in favor of it.

Speaker 1 (08:24):
They came out and said like this is good. Number two.

Speaker 3 (08:27):
Meta was like, this is sensible. And they didn't support the bill, but they're not opposed to it. Even OpenAI was like... well, I think you could tell.

Speaker 1 (08:36):
Any kind of prescription, we don't like that, but even...

Speaker 3 (08:39):
If it's the flimsiest one, I feel like you could
see them like the gears.

Speaker 1 (08:44):
Turning in the corporate machine.

Speaker 3 (08:45):
And they were like, are we... like, basically this bill requires us to, like, have an intern, like, write some copy, or even just have, like, ChatGPT generate it.

Speaker 1 (08:55):
That's the thing. Put it on a website.

Speaker 2 (08:57):
They don't even have a thing in this saying you can't use AI to write it. I would have probably done that myself, personally.

Speaker 3 (09:07):
Yeah, so, number one, it does next to nothing in my eyes, and I just can't... I mean, I know all these, like, advocacy groups and people who are, like, trying to get some good laws on the books. They're hungry for a win, so they want to say, like, yeah, we did something. But this is, I think, to me, this is worse than nothing, because it's gonna let Gavin off the hook on, like, a handful of bills that

(09:29):
might actually do something, and.

Speaker 2 (09:31):
It will give the fake view that the AI companies are regulated, so they're able to continue doing all the actual harmful stuff. Are there any regulations you've seen that actually approach the actual harms, anyway?

Speaker 1 (09:55):
Oh?

Speaker 2 (09:55):
Sorry, suggested regulations or pills or anything like that.

Speaker 3 (09:58):
Okay, yeah, there's a couple. So, I've been talking to... I've talked to lawmakers over this process. I've talked to some of the sponsors of the bills, and I've talked to labor groups. And the one sort of through line that runs through all of this is that Silicon Valley and its lobbyists have just been out

(10:22):
in force trying to crush even the most sort of basic, common-sense regulation, and so laws or proposals that started out with some teeth have had most of them knocked out, or have been delayed until next session. So we're left with a few things, a few things that, you know, I think more than anything,

(10:43):
they're just, like, bellwethers as to whether or not it's even possible to get anything done. Because, as your listeners will know, right, like, the whole AI-for-enterprise thing isn't working out so well right now. No one buys it. No one's buying it, no one wants it. So they're gonna have to make

(11:04):
some changes in the next couple of years. And the way they're gonna... my guess is that they're going to try to find ways to sell people on its other capacities, things like, it's a surveillance tool.

Speaker 1 (11:17):
It'll do, you know, it'll... surveillance tool.

Speaker 2 (11:20):
It's like... it's not... That's what's funny about this, because, like, the actual harms to mitigate would be training and environmental and energy. Yeah, but line must go up. So it's like, yeah, well, we'll regulate because of the surveillance thing. I get that, like, there is already surveillance, but it's like, oh, what if they put all the data in a large language model?

Speaker 1 (11:40):
Yeah, would the large language model do anything with it?

Speaker 2 (11:43):
Like, what do you... like, we've already got them. But wait, so what are the bills to be seen?

Speaker 3 (11:49):
Okay, so, you know, there's one. I think the one that's still out there at the time of recording. Who knows, it could have been vetoed or signed by the time that this goes to air.

Speaker 1 (12:02):
But there's one.

Speaker 3 (12:03):
There's one bill that Silicon Valley is genuinely upset about and afraid of. And it's a bill that, like, sets the very lowest bar, and it says, essentially, if you are going to make a chatbot and market it to children, then you have to be able to demonstrate

(12:24):
that this chatbot isn't going to make them harm themselves.

Speaker 1 (12:28):
Ah. So they hate that.

Speaker 3 (12:31):
They are, and they're... you know, there's this Silicon Valley lobbying group that's kind of famous, in California especially, called the Chamber of Progress.

Speaker 1 (12:41):
It's like, yeah, it's.

Speaker 3 (12:45):
Like, immediate reflexive... immediately, I know, gag reflex immediately. So they've got this guy who's out there writing op-eds in, like, the San Diego Tribune and doing press going, oh, it's overbroad. And let me tell you, their actual line on this is, if this bill passes, then you're gonna be taking away AI that could educate children.

(13:08):
You're gonna be taking away AI from children, and they're not gonna have the same advantages that children with AI have. And they're running this big Facebook campaign. They've hired lobbyists specifically to take... they often do this.

Speaker 1 (13:24):
It's fucking evil people. They are evil people.

Speaker 3 (13:27):
I mean, for me, this was, like, you know, it passed the threshold once the Adam Raine stuff broke and OpenAI is, uh, you know, trying to hem and haw about, you know, oh, well, you know, we're going to do this or that. We passed it, we passed the Rubicon, right? They've got chatbots that are telling children to,

(13:49):
you know, hide the noose so that their parents don't see it.

Speaker 1 (13:52):
And like it's again.

Speaker 3 (13:54):
They make it seem like, oh, AI is this frontier, we're gonna work something out. It's a US product, it's a software product.

Speaker 1 (14:01):
But yeah, go ahead.

Speaker 2 (14:03):
I have a theory. Yeah, theory? Okay. So, I don't think they can control these models. I don't mean because they're intelligent. I don't mean because they're autonomous. I mean I don't think you can actually prompt a large language model to categorically stop it doing something.

Speaker 1 (14:19):
Yeah. I don't think... Then you should not be selling that to children. Yeah, yeah, to one degree. I'm just saying that, yeah, I don't know if they're capable, because I think you're right.

Speaker 2 (14:30):
But my theory is based on costs, because of Claude Code, that they can't do cost control. If they can't do cost control, it means the model won't listen. Yeah, and I reckon that they can't be like, never talk about killing yourself. Yeah, like, they just can't do that.

Speaker 3 (14:45):
Yeah, or it would require, like, going back through, you know... There's been... I've seen a lot of speculation that the reason that it's talking like this is that a lot of the language is coming from, like, pro-suicide forums in the bowels of the internet. So, like, they don't want to go through and take the time to sort that out, yeah, or discussions of suicide.

Speaker 2 (15:06):
Probably articles as well that say how to deal with someone who's doing just horrible stuff. Yeah, really fucking... And is this a California bill?

Speaker 3 (15:15):
Yeah, it's a California bill. Yeah, and it is, yeah. So, you know, if you're on Facebook, there's a front group spun up by some of the VC firms, like a16z, you know, Andreessen and these guys, and Y Combinator. There's a front group called the American

(15:35):
Innovators Network, and they're running all these ads arguing against this bill, again, whose sole purpose is to ban chatbots from being marketed to children when they also try to convince them to harm themselves.

Speaker 2 (15:51):
Not even banning them from being marketed to children.

Speaker 1 (15:54):
Just, you have to stop them being harmful, you have to...

Speaker 3 (15:57):
Yeah, it's the way that it's phrased, which is what they're all hung up on. It's just, like, you have to be able to demonstrate that if you sell this to children, it

Speaker 1 (16:04):
Will not tell them to harm themselves. And I think...

Speaker 2 (16:07):
It's one of the more reasonable requests you could ever ask of a company.

Speaker 3 (16:11):
I cannot think of something that is more reasonable. But they're saying, oh no, it's too broad.

Speaker 1 (16:16):
Everything else will get caught up in this.

Speaker 3 (16:17):
You can't have a chatbot, because... But then it's always like, but why? Why couldn't we have this chatbot in the classroom? Yeah, whoa, because it might tell somebody to kill themselves. Like, that's

Speaker 1 (16:26):
Why, yeah, they don't. This is again we should these.

Speaker 2 (16:29):
people should... we should try and interview them, like, because the question would be: say that you agreed with this bill. You disagree, but say you agree with it.

Speaker 1 (16:39):
How would you stop this? Yeah, just how would you stop it? Can you stop them? Is that what you're upset about?

Speaker 2 (16:46):
Is the reason you're mad because you literally can't stop this? Yeah, because that's the thing. They hem and haw around any kind of protections, yeah, anything, anything. And now I wonder if it's because they can't, and they want to be like, oh, it's too powerful. No, it's too shit.

Speaker 1 (17:06):
Yeah, it's because it sucks. It's not because it's powerful.

Speaker 2 (17:11):
It's because you built something shitty and hard to control.

Speaker 1 (17:14):
Exactly. It's shitty lava.

Speaker 2 (17:16):
Lava is hot and can burn through most things. That doesn't make it intelligent. Your inability to not drink lava, just, like, fucking...

Speaker 1 (17:25):
Fucking... I mean, it's a combination of both those things.

Speaker 3 (17:29):
Like, it's either, oh, this would be really expensive to

Speaker 1 (17:33):
fix, or, oh, like, I don't know if we can... I reckon

Speaker 2 (17:37):
What they would have to do is they would have
to just neuter it.

Speaker 1 (17:40):
They would have to just make

Speaker 2 (17:42):
it so that anything that gets even close to that conversation would have to be just, like, shut down, to the point that you can't even talk about superhero stuff. They would probably just limit it to the point of nothingness.

Speaker 3 (17:55):
Yeah, which, like, yeah. I mean, if that's what they... I mean, again, we're talking about children here, so, like, yes. If that's what they have to do, then that, in my mind anyways, is something that

Speaker 1 (18:08):
They should do.

Speaker 3 (18:24):
But again, it's wild, because they don't usually hire lobbyists to oppose state-level bills. But the sponsor here, Rebecca Bauer-Kahan, it's the LEAD Act, is what it's called, it's AB 1064, she's just like, I've never seen anything like this. They're having lobbyists come and knock on my door and, like, you know,

(18:44):
yelling at me about this, and it's a full-court press. And we could talk a little bit after this about how this is sort of part of a broader movement where, you know, Silicon Valley is sending its lobbyists out all across the country. But it's so atypical to do on this level. I mean,

(19:08):
Silicon Valley hasn't wanted regulation on anything for a long time, but this, the level of concentrated effort and sort of the campaigning, is new, right? Like, I've been a tech journalist for fifteen years and I've never seen anything like we saw over the summer, where Silicon Valley tried to lobby for a ban on all state-level

(19:30):
AI lawmaking. They really got a whole sort of united front together, they got stakeholders from the different... they did fail, though, but they failed by basically one vote, and they failed because of a Republican in Tennessee who represents Nashville. Yeah, and she was like, wait a minute,

(19:53):
we have a law on the books that protects our country music industry from being slopified, and this would overturn it. And they were like, well, yeah, but, you know... yeah, yeah. And so she kind of came out against it, and if it wasn't for that, it would have... well, it would have passed.

Speaker 1 (20:09):
And that is bonkers. It's bonkers.

Speaker 2 (20:11):
That's wild. The country music industry saved us.

Speaker 3 (20:13):
Yeah, country music. And Nashville's like, no AI music slop law. And they're gonna try again, though. They're gonna try again. Ted... yeah, Ted Cruz keeps talking about it. He's interested in giving it another go, so it'll be coming down the pike. But to be clear, like, no, this has never happened. You've never seen somebody say, okay,

(20:34):
we're gonna ban lawmaking around search. No search engines can
have any laws made around them or social networks.

Speaker 1 (20:41):
This is its own thing. This is an AI thing. Yeah, there just can't be laws.

Speaker 3 (20:46):
No, it's totally anti-democratic, it's totally absurd, and, I mean, it tells you all you need to know, really, that the Silicon Valley interests are willing to push all their chips in and team up with the Trump administration and its allies to try to get this done. I mean, I don't know, fucking Newsom... he's... I mean, Newsom has vetoed a ton of solid AI bills.

(21:09):
I mean, last year Newsom vetoed a good bill that I think was good because

Speaker 1 (21:15):
it was... it was on... So, I don't know if you caught this.

Speaker 3 (21:19):
But last year the Teamsters and some pro-labor groups, uh, fought for a bill and got it passed, bipartisan, through the House, that says, okay, if you're going to use autonomous or AI technology to run a truck that's over a certain amount of weight, then there should be a human safety operator making sure it's not hitting people,

(21:42):
not running into people. It passed both houses. Newsom vetoes it, and two weeks later there's the autonomous car that drags a pedestrian across San Francisco.

Speaker 1 (21:54):
If that timing had been different, yeah, they.

Speaker 2 (21:57):
Thank you, Gavin. Very good, Gavin. And...

Speaker 1 (22:00):
It wouldn't have necessarily applied.

Speaker 3 (22:02):
But I think the optics would have been different, so he wouldn't have been able to, anyways. Newsom is perfectly happy to veto a lot of this stuff. The difference is, his constituents here in California, they care about this. And in the case of the LEAD Act that we were talking about, one of the reasons that it stands any chance at all is that his wife is, like, a vocal

(22:26):
supporter of the LEAD Act, and she's been on some stages saying, like, I think we should protect our children from AI slop. And so it's gonna be, like, his wife on one hand,

Speaker 1 (22:38):
and all of Silicon Valley's lobbying force on the other.

Speaker 2 (22:42):
Newsom versus Newsom, Jesus Christ. Yeah. So here's something: why does no one push any bills that actually... I realize that, even doomed ones. Why does it seem that there's a lack of any bills that are actually aimed at the technology, written by people who have used it, for example? Yeah,

(23:03):
why does that never happen?

Speaker 3 (23:06):
I mean, we still have this mentality ingrained in this country, especially in the political class, that just doesn't really have a lot of hands-on, you know, experience with the tech. It's changing a little bit, but I think it's just ideology. It's just like, oh, well, we don't want to stifle innovation. We don't, you know,

(23:28):
if they need to build, you know, a million data centers and, you know, encircle the nation in Stargate outposts, then I guess we'll defer to you.

Speaker 1 (23:41):
It's always been this way.

Speaker 3 (23:42):
It's always been deferring to industry and then, uh, reacting, right, and then saying, like, oh, well, that was a bridge too far, and then they try to, you know, get some... So, like, there are laws on the books now, like, ten years after social media, sort of, you know, was on the scene. They don't even address the problem, though. Yeah, well, I mean,

(24:04):
the biggest problem.

Speaker 1 (24:05):
Yeah, no, I mean, there's been some antitrust effort.

Speaker 3 (24:07):
That's the closest you've probably gotten to some decent efforts at, you know, reining in the giants.

Speaker 1 (24:14):
To close us up, what would you... what would be the regulation that you

Speaker 2 (24:17):
Would want to push through other than the stopping the
kids getting the AI suicide l M, that one seems
pretty good.

Speaker 1 (24:24):
But a dream bill for you?

Speaker 2 (24:27):
What might it look like? What are the things that you actually think need to be restricted?

Speaker 3 (24:30):
I mean, honestly, like, we have not even touched... we barely got to the tip of the iceberg here about what needs to happen. Because, I mean, the things that, you know, I worry about are the same ones that you cited, uh, which is the environmental impacts. But that's almost a separate issue, right? And in fact, all that has happened

(24:53):
on the legislative and policy front is that the AI companies have convinced... they did this to Biden, by the way, it wasn't Trump. They convinced them to relax environmental regulations so they could just build more data centers without being subject to as many fines.

Speaker 1 (25:06):
So you have to get serious.

Speaker 2 (25:07):
I saw some regulation around... they have to report their power draw somewhere. Yeah, like, they have to talk about how much energy they're using?

Speaker 1 (25:14):
Wow, wow. Right, yeah. And I, you know, I do think that...

Speaker 3 (25:21):
I mean, most of these bills are written in a way that doesn't just apply to AI. There was a decent... I mean, again, everything's just been battered to bits by the lobbying machine. But there is a bill that he did sign that sort of semi-restores gig workers' rights to unionize.

Speaker 1 (25:41):
That's it.

Speaker 3 (25:42):
It's good, but there's a big asterisk to it. It's a little wonky, so we won't get into it. But it's a good step. We need gig workers to be able to organize. Right now, they can't in most of the states. So I do think, you know, workplace protections, and protecting against, uh, automated hiring

(26:03):
and firing.

Speaker 1 (26:04):
Shit, and wage depression, the No Robo Bosses one again.

Speaker 3 (26:08):
It's like, even the sponsor of the bill, uh, Lorena Gonzalez, who's with the California Labor Fed, I talked to her, and she's like, look, this is what we could get through, not anything else that we had in there that really seriously banned discrimination. And it's really not about the tools, again. It's about allowing bosses to use this as an accountability sink, or to offload, saying, oh, yeah,

(26:30):
I think he's.

Speaker 2 (26:31):
Kind of where you need to regulate sometimes, Yeah, exact,
especially when you don't going to regulate the fucking technology.

Speaker 1 (26:37):
Yeah, exactly.

Speaker 3 (26:38):
So, I mean, there's a million things that need to happen. I mean, I think antitrust absolutely needs to happen. But, you know, I would get way more radical than anything that's being even talked about right now, because right now it's just profoundly anti-democratic where we're at. Silicon Valley is...

Speaker 1 (26:59):
Just, they're just, you know better than anybody, they're just hoarding.

Speaker 2 (27:06):
Again, this is the one thing, and maybe this is a dumbass's opinion, but why do you have to fucking listen to lobbyists? You...

Speaker 1 (27:13):
I mean, you do not. Otherwise you do not.

Speaker 2 (27:16):
Yeah, like, I realize I'm not a big politics knower, but it feels like you could just not talk to them. I guess that sometimes they contribute to your political campaigns, but just don't do it.

Speaker 1 (27:29):
That's it. Yeah, this is how I end up in politics.

Speaker 2 (27:32):
I'm just like, well, if you didn't listen to... He just didn't pick up.

Speaker 1 (27:40):
Why not? I don't want to talk to you.

Speaker 2 (27:42):
Sound really annoying. You keep... I want to run this bill. You keep texting and calling me saying I should, and I'm going to block you, politicians.

Speaker 1 (27:50):
Listen, let's get you a soapbox. I'll vote for Ed Zitron. All right, Brian, where can people find you?

Speaker 3 (27:55):
I am bloodinthemachine dot com, uh, and Brian Merchant on most social media platforms.

Speaker 1 (28:02):
Lovely. Thank you for joining me. This has of course been your Better Offline duologue for the week.

Speaker 2 (28:07):
Back next week with an interview with Stephen Burke from Gamers Nexus.

Speaker 1 (28:10):
Thank you for listening, everyone, Thank you for listening to
Better Offline.

Speaker 2 (28:22):
The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at EZ at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend

(28:43):
you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash BetterOffline to check out our Reddit. Thank you so much for listening.

Speaker 1 (28:52):
Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.