All Episodes

October 21, 2025 25 mins

California just passed the nation's first AI safety and transparency law—a landmark moment that could reshape how we regulate artificial intelligence across the country. But is this groundbreaking legislation enough to protect consumers while keeping innovation alive?

In this episode, Dave and Chris dive deep into California's pioneering AI bill, exploring everything from pre-release safety testing to whistleblower protections.

We tackle the tough questions:

  • Are current consumer protections sufficient?
  • Should military AI play by different rules?
  • And can global cooperation on AI regulation actually work?


Whether you're an AI founder trying to stay ahead of regulations or simply concerned about the ethical implications of this rapidly evolving technology, this conversation will challenge your assumptions about the future of AI governance.

Join us as we explore what California's bold move means for startups, innovation, and the future of responsible AI development.

Mark as Played
Transcript

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Yeah, this is not like this can't just be left to the
private sector, right? I can't believe I'm saying this,
but like it cannot be left to the private sector.
Cause we know the, the incentiveof the private sector is to
drive shareholder value, right? But the shareholder value might
not align with the literal health of the shareholders.
Let's get it rolling. Big ideas, Money, hustle, Smart

(00:24):
dream. So why turn that grinding
through a? Joy Ride.
Hey. SB53, California, Yes.
Yeah, you don't. You like, did you watch the OC?
It was more in your wheelhouse Ifeel like.

(00:44):
No, actually you've been old for.
It yeah, I think it was a little.
Too old. Yeah, my buddy Dan is like super
into it and all I know is various hilarious bits.
Or like I know the bit when Marissa shoots Trey and he goes
what you say and then we're asking all did a spoof where?
They all, they all do you. Like that at the end, there's

(01:05):
like the two like cops in there too that they end up shooting
three. That's a good throwback.
To like now every 22,000 contains word California, we're
gonna go into a flash. Yeah.
Like the OC, you know. That's great content, Chris.
That was back in the days when the Internet was just fun.
Yeah. You know, it was like it wasn't
as commercialized as it is. Yeah, I remember the good old

(01:27):
days of the Internet before everybody started yelling at
everyone. But anyway, OK, So what I want
to talk about this week is is OKthe California AI little bill,
the bill passed a few weeks ago.So so this.
Is SP53 Californias AI you just safety and transparency law just

(01:49):
so that we can get this into? The transcript it's just
precious heard a instead of SEO.It's like LM optimization.
Oh yeah, yeah, I'm sure that's the thing.
Yeah, OK, so can I go through what I wanna do is set the table
because I know not everybody necessarily know all the
specifics that they go through. Let's like First off, so it is

(02:10):
the first state to do this. Europe has already passed
something for a I think it was like 2 years ago.
But this is the first time in the US is happening.
It kind of matters that it's in California because that's where
all the a lot of the AI companies are like, you know,
like like importantly like rock is, is not like Elon Musk
companies in Texas. Right.
Oht Really. So there is there are some like

(02:31):
nuances here. Yeah.
But let's talk about what it's all about.
So. So there's a couple of big
buckets of what this thing covers.
So fresh, it's pre-release safety testing, disclosure
reports, like a nutrition label almost for AI, the California I
safety office being established.Yeah.
And financial penalties. So it's are the first one.
So Frontier AI developers must run tests before launch to check

(02:52):
whether they're models can be misused.
Things like mass disinformation,bias, discrimination.
Yeah. Cyber security you.
Know these exploits yeah OK the expectation is documented test
protocols not just like you knowtrust me I'm a doctor kind of
thing like it's very much like show us and I would think that a
lot of that is going to be effectively based on what

(03:12):
Anthropic does which is this like disclosure of almost.
Everything they they're probablya leader in this, I think right
now. I would say I would definitely
pick them as the safest AI platform, right.
And actually, and again, I mentioned this on an episode I
can't remember last week, I think it was a week.
And it was how we were talking about how in the Scott Galloway
office hours they did. And I can't remember the other

(03:32):
guys name, but he he's an AI leader.
And I'm really embarrassed. I don't remember his name right
now. But he was talking about how
it's it's it's like you want to speak with your wallet on this
stuff. So if like safety matters to you
as a consumer, right? Like you probably should be
leaning towards anthropic. Sure.
I thought that was pretty cool. And they specifically don't use
like Grock and stuff like that because they don't trust Elon.
Yeah. And I'm like, yeah, I get

(03:53):
behind. That, yeah, I could.
You know, like a name like Grok who would be scary like clothes.
Seems very nice. Yeah, clothes.
Clothes could be French, we don't know who like.
So 2 disclosure reports. So developers must release plain
language reports that explain what kind of data is the train
to the model, it's known risks and limitations and what
safeguards are in place. So it's kind of like a cross

(04:16):
between an investor prospectus and like a food label.
That's why I like this as a, by the way, I got I to help me
write that this specific little spot, alright.
But the transparency is meant for both, like, public folks,
like people. Yeah.
And regulators thought that was pretty good.
And then there's this establishment of the AI Safety
Office, which I'm skeptical of, but it's a new state agency
tasked with monitoring and compliance and advising

(04:36):
lawmakers. It doesn't yet approve models
the way the FDA does, but it candemand by law fixes or impose
penalties if standards aren't met.
So to that end, the fines for these sorts of things, companies
that don't comply with these directives from that office can
face fines of up to $10,000 per violation per day grow, so they

(04:59):
could stack. They could pretty.
Quickly, Yep, I think. But you gotta remember, too,
this is when we did an episode on this a little while ago where
it's like, this is also the world of, you know, raising
billions of dollars, you know, through private.
Equity, right? Like they're not gonna blink at
those fines. I think it'll it's something,
but I don't know. It's not gonna make a dent, not
even. Gonna make a dent.

(05:19):
And the state can also seek civil penalties in court, which
could escalate costs. You know, that's when you get
into like the lawyer territory and all this stuff.
So I guess the first thing I want to get at here is what is
your hot take on whether or not we even need AI regulation?
Let's zoom out for a second. Do you worry that?
Oh, by the way, another place that has AI regulation.
Yeah, China. China.

(05:40):
Yeah. OK.
So what do you worry that this stifles innovation on this?
Do you worry that this is not necessary or do you think this
is like something? We have so I'm definitely pro AI
safety. I think that is is scary.
You don't have to fight not to have that yeah.
I do think that secretly US, China, every country will have
their military AI that will not be subject to any type that's

(06:02):
regulating safety thing. I agree with you cause they're
gonna make robots that fight fight each other.
Actually, crazy thing, I saw a video this morning of Optimus
Tesla's robot. Firing an AK-40.
Seven no no fighting Kung Fu what yeah like with a with
against like a human OK did it win now it was a little bit
choreographed looking OK but it was it was doing the moves it

(06:26):
was very fluid it. Looked awesome.
Was it making the decision to dothose moves or was it like a?
Pre, I don't know it was. It feels.
Like which which bugs me. It was tethered to some sort of
somebody that's doing that, yeah.
That's the big shadow puppet. At that point, OK strings are
visible. No, but it but you know, like

(06:48):
it's going in that direction. So I think you've you've gotta
see like civilian AI in one category.
We're gonna have safety and I think military stuff is gonna be
the Wild West. It's gonna be totally covert
types. What do you think?
Of that like a top. Secret Project 2027 or whatever,
right? That's one of.
Those predictions and Jeffrey Hinton, the godfather of it's
Jeffrey. Yeah, yeah, yeah.
Geoffrey Hinton, the godfather of AI talking about this too,

(07:10):
that it's like probably in. I feel like any good technology
finds it unfortunately it's initial use military right
things. So anyway.
OK. So definitely pro AI safety.
I like, but it doesn't mitigate that like in your own comments.
So you're so you, you just thinksafety is good for the civilian

(07:31):
infrastructure? Look, I don't think we're gonna
be able to argue or touch or know about.
I don't think we're ever gonna know about what's going on with
AI and military. They're not going to tell us.
OK, until something happens. Until, you know, until we see
the B2 bomber take off and go, what the heck is?
That, yeah. Like honestly, I don't think
we're gonna know about that the but as far as like the civilian

(07:54):
side of it, obviously the transparency is a really big
deal. I like that a lot.
The incident reporting. So when they do have a safety
incident, they're required to report it.
I think it was 15 days and if it's imminent harm then it's 24
hours. So that's really.
Good does go full Skynet. Yeah, the other, the other piece

(08:15):
on this, which I think is critical of the whistleblower
protections. So if somebody's working an AI
company and one of those things happens, yeah.
And management doesn't do anything about it.
They report it, whatever they'rethere can't be any repercussions
for, you know, whistleblowing onan AIC.
Means those are totally other debate here, which is that are
there really there's always repercussions to whistleblowing

(08:37):
like but it's just they won't. Tell you know you gotta try this
is. Worth a lot of money to a lot of
people. Like, yeah, I would be worried
for my life. If I something like that, I
think that the cost of not coming forward when you have a
safety incident for sure is, andI'm coming out later is much
more than if you just say, hey, yeah, it went wild on this.
In fact, we're gonna share all this with the other AI companies

(09:00):
and show them, hey, this is a potential issue that we should
solve as an industry. So I think it's critical that we
create that safe environment forthe transparency and the
whistleblowers and all that sort.
Of not disagreeing, I just think.
It's otherwise what are we doing?
No. Well, I I'm not disagreeing, but
I just think it's, it's gonna bereally tough to get people to do
that, but I think that's a good thing to have.
Yeah, very. OK, so.

(09:22):
Whatever, screw you so. So the thinking about this as
well, I, I actually wish it, it,it only applies to $500 million
revenue companies and higher. I think that was.
In California. I thought this was because.
So, yeah, there's some issues here, OK.
In fact, I would think about ourlittle company which was doing
like 7 to 10 million AR. Yeah.

(09:44):
And we were implementing AI. I think we should voluntarily
have put this sort of stuff, youknow, a lot of it is gonna be
based on the underlying model that we're using.
We do not we did not have enoughmoney to build our own LM and
yeah, you know, do any frontier,you know, yeah, models or
anything like that. But I think we could have stood
on the shoulders of giants. You said, hey, we're using these

(10:05):
AI platforms, but here's our internal, you know, code of
conduct and how we operate on those as.
Well, yeah, You know, I was kindof wondering if they would do
something like that where you have to have like if you're
using AI in any capacity that you have a code of.
Conduct like you have a privacy policy the the GDP.
Cyber security policies like your your employment policies

(10:25):
like all these other it's very common like as soon as you're
above a certain size threshold exactly like.
Everybody's gonna privacy policyand terms of use on their
website now. Yeah, why don't we have an AI
policy on on everybody's website?
And maybe there's some standardized ones like for
instance, like software licensing, There's there's MIT
licenses, there's different opensource licenses, stuff like
that. I wish there could be just some
like off the shelf AI, you know,policies that you can then say

(10:49):
we follow. I again, I make the click noise,
but I doubt the click Mike is picking it up.
But we follow this particular AIpolicy and then people can look
at it and say, oh, they're following that one.
That's a good idea. They will do this and that.
Yeah, I like that. I like.
So some standardization on it, especially for small companies
so that it doesn't become. I could just imagine.
I'm certain right now people that are going out for software

(11:12):
RFP's or tenders are putting in there.
We need your AI policies and they don't care what size
company you are. So I think it's one of those
things we need to get. I would.
Hope so. I do actually.
Have everything else, of course.They must have like an AI
disclosure policy. If the description absolutely,
absolutely. And I'm sure there's a big part
of it that's you're you're not allowed to use our data to train
your. AI so part of this is that I
know that part of the Chinese policy is you have to disclose

(11:35):
when you're using a yeah, like no matter what you're doing.
So if you're in the market with like a like an advertisement or
something, yeah. And you use it to do you have
to. Disclose, but you gotta like
ask? I'm not sure.
On the advertisement. But there's some point where you
make the disclosure to the the ruling party and the people, I
guess, yeah. So then the other thing I'm not
loving about this is just California right now, obviously
that groups. In.

(11:55):
I mean, you gotta do it. You gotta you gotta start
somewhere, but I hear. You right so they're never.
Gonna do a federally right now. No, I I don't.
Think there's any chance that happens?
Federal I almost think stuff like this should be like there
should be some sort of like the UN should be playing.
A role, but then the countries have to opt into it like so
International Criminal Court. Who's not a member?
US. US.
States so like, you know. Yeah, right.

(12:16):
OK, but I. Hear you.
But still, I think there's probably an approach and there's
probably a lot of common ground.If China is already regulating,
the US is starting to regulate it, Europe already regulating
it, it's probably a lot of overlap in the way these
policies are being developed. So maybe there's an opportunity
to blow this up to a global solution, or at least a.
Lot of interest in that, but I think there's, I think again,

(12:37):
you would never get the United States to do it.
I think you could have the because they don't want to be
held back by an international order, you know what I mean?
Like the United States, despite failing UN and having the UN,
doesn't like the UN. That much dude?
The first bullet is like military use of AR excluded from
this policy. You and I both know that because
nobody's gonna sign on to something that's gonna, you

(12:57):
know, handcuff them for developing.
Milli mean when they. Know the other side is doing it.
Russia, SALT 1, SALT two, we hadnuclear nonproliferation
treaties. Like I don't think it's that
outrageous. I do think that right now,
because it's in it's like early stages, it's more difficult.
But I just that's kind of a separate point.
I just fundamentally think that a lot of nations will resist it.

(13:20):
I mean, imagine, I don't think you get Russia to the table, not
that they seem to be pretty far behind in this whole thing
anyway. But like, you know, another
example though, like I just think you'd have a really hard
time doing it internationally. I think you'd focus.
I think you have like a almost like a Security Council of AI,
like advanced nations. Yeah.
Which sadly, I don't think Canada would be a part of it.
But I think you have those nations involved in it and, you

(13:41):
know, you have them establishingkind of code of conduct and
rules in the Geneva Convention. Maybe that helps.
Yeah. But agreed that I think in
principle, I agree with you. That would be nice if it was
broader to the entire United States as a as opposed to just
California. Yeah.
But yeah, I and and I think it would be complicated though to
try and execute that. Yeah, Yeah.

(14:01):
I think there's also a bit of a risk here on like startups.
So if you're a startup and you're working in AI and and
somehow you've raised enough money or you've got enough
compute or you've got enough data that you can actually be
one of those foundational modelswhere you're generating what was
getting close one that. Did this really well.
That did it for like 1/10. Of the deep.
DeepSeek, DeepSeek, right? Yeah.

(14:21):
You kind of like. So this is actually another one
like that, yeah. Good.
That'll be they probably would have been under the 500 100%
unregulated but very advanced AI, right.
So yeah, there's a lot. Of I think it was very, they
really specifically targeted like big organizations, yeah.
Because like fair enough. Like the the most of the AI
investment and all that kind of stuff is concentrated in like a

(14:44):
couple of players. Yeah.
But I do think it's dangerous tothink that a small shop couldn't
do something material with it, even with trying to do their own
LM. And it's a good point.
So there's $500 million. Threshold.
I think that's. Wrong.
That's gonna could have done better.
That's gonna go down I think pretty quick or you know, I can
think of two of like companies that maybe don't have any
revenue, OK, but have a lot of funding, Yeah.

(15:06):
And are, you know, building. What happens if I'm in the
Cayman Islands? Is my corporation?
Yeah, you know that kind of bullshit.
Reaction shopping will happen for sure, right?
Right. So, but at the same time, I
think it's gotta be like consumers are driving it because
they're saying, oh, these are responsible AI companies that
follow SB53, that have clear policies, that whistleblower
protection, like all those things I'm gonna buy, I'm gonna

(15:29):
vote by by spending my money with them and grow those
companies. Yeah, rather than, you know,
the. Office tree like that's the the
ideally the invisible hand here kind of helps with some of this
stuff. Tired of business authors and
influencers who have never had asuccessful business.
Me too. I sold my company for $40
million and I want the same success for you.

(15:51):
Check out Startup Different on Amazon, Audible, and Kindle.
And the other thing I wanted to get at the, well, we're just
before you continue while we're talking about consumers is what
I also found kind of interestingis that there's, there wasn't
like, so the consumer protectionpiece of this was trying to

(16:11):
provide a bit more transparency,but I think they could have gone
a bit far further with like whathave we learned in the past 20
years about social? Media.
Right. It really fucks up young kids.
Yeah, OK. It's really dangerous, her young
minds. Oh, yeah.
And so I'm kind of like, why didn't they come out?
Or maybe this is a future thing,but I kind of worry that, like,

(16:32):
my kids are gonna be the Guinea pigs of this generation.
So why isn't it kind of like, isn't it like some kid who got
talked into killing himself by AI and stuff like that?
Oh, God. Like, there's some really fucked
up shit. Yeah.
And so you're kind of like, why don't we make it so like, you
gonna be like 16 or older, maybeeven older than that to to use
these tools to have a license for these tools.
We're starting to do all this online verification and all

(16:52):
these other things. Like I actually think that this
is like kind of intense, right And or at.
Least have the AI train know theage of the user would be trained
on, you know, detecting those types of things and like
immediately let mom and dad knowif they're asking stuff about,
you know, they're. Should be from the.
Conversation that goes into likethe transparency piece and some
of the innovation that I think is gonna have to.

(17:13):
Happen in that space. You think about how long it's
taken though, like I don't thinkI like, how about that?
Now, somebody asked me the otherday.
My, my kid is 4 and one of them anyway, the other one and
somebody asked me, what are you gonna do for a smartphone for
them when the time comes? I was like, holy shit.
Like first, I think it was you actually, but like not for a
long time. 1 Secondly, is like there are smartphones now that

(17:34):
you can get that like kids smartphones that only allow
certain apps, only allows certain messages from certain
people, only calls from certain people.
Like really like limited lockdown and generally the
review to be pretty effective, right?
And no social media accounts, that kind of stuff.
And I'm like, we need like that like now for I, I don't know
what this looks like it maybe it's a kids mode, YouTube Kids,
but like, you know, open AI kids, you know, something that

(17:56):
like really locks on this thing because I just worry that it's
going to manipulate minds. It's going to make them lazier.
You know, we talked about the spell check thing and like that
comparison previously, yeah, could in some cases make them
like Dumber. So I, there's a whole bunch of
things where I was like, well, what about that?
Yeah. And instead this felt more like
a let's prevent the end of the world, which I'm also down for.

(18:19):
Yeah, but I I just think that if.
We're going to. So you're thinking this is
really aiming high and we need more pragmatic laws around this,
around everyday use of, I think.The consumer protection is like
laughable. I think the punitive measures
for people building malicious AIis reasonably effective.
Yeah, my hot. Take so how do you so let's say

(18:41):
they did want to do that sort ofstuff.
How are you thinking they implement this stuff?
Is this like I remember when we were running our company, we we
had like third party penetrationtesters come and try and hack
our software and things like that.
Is that can Ioffer you the way that this should be regulated is
like. Through testing, that's not a
bad idea. I think there's a certain like
the burden of proof unfortunately remains like on

(19:02):
those companies, right, which for better or for worse, I think
there's a couple of things that happen.
I think that for any of these major corporations open AI
anthropic. So I guess like X AI, there's a
fourth one I can't remember right now that's embarrassing.
But like those the big ones, I think that the and this do this

(19:24):
the capitalist way. I think the government of the
United States should make major investments in each one and sit
on each board. Huh.
That's interesting. OK.
So there are fiscally tied to them, yeah, they have a
fiduciary responsibility into capacity to people.
But you're gonna remember, if you're on a board, you have a
responsibility to shareholders. This is not to the society.

(19:45):
This is. Too important.
This is not like this can't justbe left to the private sector,
right? I can't believe I'm saying this,
but like it cannot be left to the private sector because we
know the the incentive of the private sector is to drive
shareholder value, right? But the shareholder value, it
might not align with the literalhealth of the shareholders.
Is there precedent? For this is like thinking back

(20:06):
when you. Know would you let private
companies make nukes? Yes, that's exactly what I was
gonna ask you. So how did that evolve?
Because there are private companies that build nuclear
reactors and power plants and stuff like that.
It presumably they could extend that and go to the point where
they were like creating nuclear.Weapons comic energy Commission,
like they, they, they think about like it's all tied, like

(20:28):
it's all quasi. Like in Canada, we use the term
a crown corporation, right? Like quasi public, quasi private
organizations, right. But I think that that's going to
get you the real oversight. The other thing I was thinking
about here too, like if you really wanted to, you could
totally sandbag or just excuse me, snowball the regulatory
organization, right? So think about this.

(20:49):
So how do you wanna, this is kind of a funny thing.
How do you wanna make it difficult for the CRA or I guess
the IRS United States to assess your corporate taxes?
You make it a shit load of documents and you fucking fax it
to them and you say good luck, you fuckers, never you're being
you want to audit me? OK, it's coming in hot.

(21:11):
Fire up the fax machine. But like, so like, I just think
of like regulatory, like that's one thing that can get slowed
down. I think you have to have
multiple levels of overlapping oversight.
It is, it has to intentionally slow the process, right?
And I think that's what so like it's, it's really weird that I,
I want to advise this, but it's like, I just feel like the on

(21:33):
the far end of this, things get way out of control.
If it, if it goes bad, it goes real bad.
And so I feel like this is something, this is the way that
we should attack it. But I like that it's still like
we're making an investment for the technology.
But now as the governments, we have a say as a special kind of
board member. Yeah.
That has a both a a a fiduciary responsibility, I suppose, but

(21:53):
also responsibility to the people of, well, in that case,
the United States. OK, the AI startup, the little
guy like us. OK, what advice do we give them?
You're screwed. Can I give you the first one
that I got? Alright, First of all, just stay
informed on what's happening in this space, OK?
So the second one, OK, hello. Like the the AI working groups

(22:18):
reports and I think you wanna belike super part proactive on
like we were talking about like you have your privacy policy,
have your terms of service and you have your AI policy pop up
on the bottom of your website and everybody's gotta click I
accept or accept all or whatever.
You know, that kind of stuff that you're way ahead of it.
You know, you're going to participate in the industry
standards development. So you're going to be following

(22:40):
the news on this sort of stuff. And you also need to be thinking
long term about AI safety and your company.
So if we were implementing an underlying model, OK, I would be
looking at how do I the output from that model?
How do I vet that output to makesure that I am being safe to my
customers so. Yeah, like I, I, I totally agree

(23:02):
with this. I also think that it should
align with your company. And I find it hard to believe
that any company out there wouldn't align with the mission
of like an anthropic that's like, don't ruin the world.
Yeah, You know what I mean? So.
Like a lot of things get ahead of this, right?
Especially if you're a startup, you don't wanna be playing catch
up. You're probably gonna see it in
RFP's and that sort of stuff anyways.

(23:23):
You wanna be on the leading edgeof that.
You don't wanna be having, I think.
Customers would respect that. I think that's a real asset if
you're in your sales, if you're doing enterprise sales and
you're like, by the way, we're. So we, yeah, we use AI, but we
have our own policies on it and we take it really seriously.
And here's our documentation. Here's a video on how we use
that. I think that really actually
shows a lot of. Professionalism I would have.
I would call it the responsible AI policy.

(23:44):
Yeah. Here's our response.
I would. Love, they would love that.
Yeah, yeah, yeah. And yeah, you know, I think the
big thing is transparency. So show how you're using AI,
show when decisions or content or whatever your product is
doing, parts of it are made by aI have that little asterisk next
to it. This was generated by AI, Yeah,
that type of stuff. This can be.

(24:05):
Wrong. Yeah, and obviously train your
staff, make sure your staff are keeping an eye out,
understanding the risks with AI,when to see it, be the
whistleblower and bring it to management's attention and be
very transparent about it. You know how like if you have a
security breach, a cyber security breach, you wanna be
proactive and tell people about it in advance?
You have to be thinking, I thinkyou would do the same thing,

(24:26):
obligate an AI. Yeah, issue.
OK, that's good. Yeah, that's what they're that's
effectively what they're at their they're legislating.
Yeah that you it's like disclosure of.
So in cyber security case. Yeah, it's like we lost all your
personal information was compromised and here's your
credit. You get the credit score,
company shows up at a free subscription for all.

(24:48):
Great. Yeah.
And then a month later, the credit score, Yeah, credit risk
protection company is. Hacked.
Yeah. Yeah, great.
But but, but yeah. But I do think there is
something like, yeah, there's gotta be some sort of like
public facing disclosure. Yeah.
This is where I actually think it would be nice if these
companies IPO because there'd bea lot more visibility in some of

(25:09):
this, I think. Yeah.
But until they do that, these are all privately held.
And they are hot. To very different.
Standard lot of companies that are smaller that aren't gonna do
anytime soon that are doing thisstuff.
So yeah. 100%, yeah. Cool.
Well, anyway, so we think it's good.
We wish it did more, and hopefully the world doesn't end.

(25:31):
That's our lovely way to end theshow.
See you later, folks. Hey.
Let's get it rolling. Big ideas, Money, hustle, Smart
dream. So why turn that grinding
through a joy ride?
Advertise With Us

Popular Podcasts

Las Culturistas with Matt Rogers and Bowen Yang

Las Culturistas with Matt Rogers and Bowen Yang

Ding dong! Join your culture consultants, Matt Rogers and Bowen Yang, on an unforgettable journey into the beating heart of CULTURE. Alongside sizzling special guests, they GET INTO the hottest pop-culture moments of the day and the formative cultural experiences that turned them into Culturistas. Produced by the Big Money Players Network and iHeartRadio.

Crime Junkie

Crime Junkie

Does hearing about a true crime case always leave you scouring the internet for the truth behind the story? Dive into your next mystery with Crime Junkie. Every Monday, join your host Ashley Flowers as she unravels all the details of infamous and underreported true crime cases with her best friend Brit Prawat. From cold cases to missing persons and heroes in our community who seek justice, Crime Junkie is your destination for theories and stories you won’t hear anywhere else. Whether you're a seasoned true crime enthusiast or new to the genre, you'll find yourself on the edge of your seat awaiting a new episode every Monday. If you can never get enough true crime... Congratulations, you’ve found your people. Follow to join a community of Crime Junkies! Crime Junkie is presented by audiochuck Media Company.

Stuff You Should Know

Stuff You Should Know

If you've ever wanted to know about champagne, satanism, the Stonewall Uprising, chaos theory, LSD, El Nino, true crime and Rosa Parks, then look no further. Josh and Chuck have you covered.

Music, radio and podcasts, all free. Listen online or download the iHeart App.

Connect

© 2025 iHeartMedia, Inc.