
September 23, 2025 · 10 mins

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
All right, let's do something completely different.

Speaker 2 (00:02):
I'm very pleased to be joined on the show for
the first time by Kevin Fraser. Kevin is an AI Innovation
and Law fellow at the University of Texas Law School,
and we're going to talk about regulating AI. But before
we talk about AI, Kevin, you are an attorney and
you think deep thoughts about all kinds of things. And

(00:23):
you heard my rant just then, and I'm wondering, and
you don't know me, so say anything you want, like,
am I a little bit right? Am I very right? Am I completely off base? What
did you think of my rant?

Speaker 1 (00:38):
Ross?

Speaker 3 (00:39):
Hopefully this isn't the last time I'm on.

Speaker 1 (00:41):
It's great to be here. Thanks for having me.

Speaker 3 (00:43):
And I like to think I think deep thoughts, and
clearly you do too. I have to say, when it
comes to some of the many deep political debates we're
having right now, you're spot on. We need to go
back to first principles. I think we've really lost sight
of the values that were animating how we set up
the Constitution to begin with, and why we relied on
a republican style of government, and when

(01:06):
it comes to having a limited government with enumerated powers,
seeing jawboning occur that results in the chilling of
speech is something that should raise a red flag for
all Americans.

Speaker 2 (01:17):
Yeah, and I note that just three years ago Brendan
Carr posted a tweet using that exact term, jawboning,
saying that the federal government cannot jawbone a private company
to restrict free speech. And that is not different in
any important way from the federal government doing the restriction themselves,

(01:39):
which ended up with a case where we got
the wrong result regarding Facebook and the federal government a
year or two ago. But we'll move on from that.
So AI is everywhere, all the time. AI is going
to transform the world in so many ways that we
can't even begin to imagine right now. It's gonna have
good effects, it's gonna have bad effects, just like almost
any technology does. And part of what's going on right now,

(02:03):
and Colorado is kind of a poster child for it.
You may be aware of this already. I assume you are,
is that there are a lot of folks who are
focusing only on the potential negatives and want to
regulate the Jesus out of it, including at the
state level. And you've got some thoughts, some deep thoughts,
about that.

Speaker 3 (02:20):
I like to think so. When we go back to
first principles again and understand what the purpose of
the national government is, it's to make sure we're advancing national
well-being, focusing on things like national security and economic stability.
And regardless of what you think about AI, whether you
think it's going to end the world or you think
it's going to solve every problem and cure cancer and

(02:42):
so on and so forth, those are big economic questions.
Those are big national security questions. So when it comes
to regulating the frontier of AI, when it comes to
thinking about how these models are going to be trained,
who's going to train them, and to what extent, if
we have fifty states from Colorado to California to

(03:02):
New York trying to dictate the actual frontier of AI,
how we're progressing this technology that has national implications,
that runs afoul of a federal system in which
the federal government oversees those big national questions.

Speaker 3 (03:17):
Now, Ross, I'm not saying that we should clear out states and not
allow them to regulate what is a very important industry,
but we need to make sure that states have what
I like to refer to as regulatory humility. No state
has the authority to project its legislation into another.

Speaker 3 (03:37):
The fact that we even have become sensitive to and aware
of the term, the quote unquote California effect, the idea
that the laws that get passed in Sacramento spread across
the country, would just be crazy for our founders to
hear about, that we're tolerating this notion of Sacramento getting
to write the rules for the rest of the country,
especially when we're talking about

(04:00):
something as sensitive as AI. And so, what I'm really
encouraging Congress to do, and what I'm encouraging state legislators
to do, is to take seriously what it means for
states to operate as laboratories of democracy.

Speaker 3 (04:14):
This looks like actually running experiments, which means finite periods of time, having specific goals,
and learning from the outcomes. And if states do that,
then I'm all for them experimenting with how best to
regulate the use of AI while leaving AI innovation to
the federal government.

Speaker 2 (04:32):
We're speaking with Kevin Fraser. Kevin is the AI Innovation and
Law fellow at the University of Texas Law School. Well,
here in East California, we take great comfort in not
having to think about our own laws.

Speaker 2 (04:45):
The Democrats in the state legislature, they can get invited
to all the best cocktail parties and just say, oh yeah,
we'll do that. So we're very used to that here.
But on a more serious note, it's actually quite painful
when we get, you know, California especially, it's sort of
a lot of the automotive stuff, which our previous couple
of governors have said we'll just do whatever they do,
which I find infuriating.

Speaker 1 (05:10):
But let's stick with AI.

Speaker 2 (05:12):
Have you paid much attention to the AI bill that
passed here in Colorado? And I'm not saying this as
hyperbole. It seems like one of the worst pieces of
legislation I've seen in a long time.

Speaker 3 (05:26):
You know, Ross, I have paid pretty close attention to
the Colorado AI Act. And the fact that we're seeing
Governor Polis, for example, say, whoa, y'all, I think we
got a little far out over our skis is indicative
of the fact that we have this crazy rush to
regulate with respect to AI. Everyone wants to be the
state legislator who can send out that mass email and

(05:47):
say I did the thing, I regulated AI. And the
debates we're seeing in Colorado right now with respect to
SB 205 show that rushing to regulate something as complex
and evolving as AI is only going to make it
look bad, because we're seeing that there are still things
we have to learn. It's okay if

(06:07):
the law sometimes lags behind technology, because that gives us
an opportunity to actually experiment with it, and to your
point earlier, Ross, to maybe identify, whoa, maybe this
technology isn't all bad. Sure, there are some real drawbacks,
but let's make sure that we can explore what the
positive outcomes are. How can this improve healthcare? How can

(06:27):
this improve education? But if we rush to regulate something
like AI, especially if we allow one state to do
so and then the rest of the states have to follow,
that's going to deny us that opportunity to really learn
how to harness AI for the best of humanity.

Speaker 2 (06:42):
All right, two follow-ups on what you said. First
of all, Governor Polis should have known better. He signed
that bill, right? He signed that bill, and you're probably
aware of this. But Governor Polis is one of the
richest politicians in America, and most of the money that
he made, he made in related businesses, so he's very

(07:03):
tech savvy.

Speaker 2 (07:04):
And he knew that bill was bad and signed it
anyway, on the hope that the legislature would come
back around and fix it. But I place a lot
of the blame for this on Jared, who is my friend.

Speaker 1 (07:16):
But that was a huge mistake.

Speaker 2 (07:19):
Here's the other thing I would say, and you don't
know this, but I am president of the Bad Analogy Club,
and I'm gonna give you...

Speaker 3 (07:25):
Congratulations.

Speaker 2 (07:28):
Yes, thank you. I've had that for years. There have
been many attempted usurpers, but I have fought them off
with examples like what I'm going to share with you right now.

Speaker 2 (07:35):
So imagine somebody puts a plate in front of you
of something that they think you may be allergic to,
and your reaction is that's a little scary, that might
hurt me. So to make sure that doesn't hurt me,
I'm gonna take a bunch of cyanide, okay. And then
there's somebody off to the side, and that person off

(07:56):
to the side says, no, don't take cyanide, that's ridiculous, take arsenic instead.

Speaker 2 (08:04):
Right. So this is what's going on in the legislature
in Colorado right now. They're a little bit afraid of
something you might be allergic to in the AI, and
one group says, all right, let's just kill ourselves so
that we don't have that risk. And then the rest
of the Democrats say, well, let's kill ourselves in a
way that's a little less dramatic or takes a little
longer. How about that for president of the Bad Analogy Club?

Speaker 3 (08:25):
You know, I think you're going to have four more years,
if not more than that. But in all seriousness, Ross,
I think that you're right that we're seeing state legislators
frame this as a binary question, which is, regulate the
heck out of AI so that we can't test it
or let it run willy nilly. We don't have to
operate in that space. We're smarter than that. We can
see your neighbor, Utah, is doing some great work in

(08:48):
this space. They have the Utah Office of AI Policy.
They're running a regulatory sandbox, which is saying, let's experiment
with AI. If consumers say, whoa, this is really risky,
this is causing harms, then we'll intervene. But let's not
say we're going to quash innovation from the outset. And
to your point earlier, we've been here before. This is

(09:09):
not history's mysteries. This is not something on the History Channel.
We know that if you get ahead of technology, if
you try to burn out the flames of innovation, that's
going to be a net negative on society. And other countries,
namely China, aren't going to govern themselves in the same way.
They're going to keep pressing full steam ahead on this issue.

Speaker 1 (09:28):
Oh, Kevin, Ah, Kevin.

Speaker 2 (09:31):
AI is scary, and we can't let people be scared, you know.

Speaker 3 (09:38):
And I think it's also worth pointing out that so
much of this use, Ross, is being blown out of proportion.

Speaker 3 (09:46):
I want to be clear that some of the use
cases we've seen, especially with respect to teen users, and
the very real, very sad, absolutely tragic outcomes, deserve a lot
of attention and deserve a response.

Speaker 1 (09:58):
But we have to put all of this in context.

Speaker 3 (10:00):
OpenAI just released a user survey report: one
point nine percent of OpenAI use is directed towards
these AI companions, or using it as your best friend,
or as your therapist, and so on and so forth. So
we should pay attention to this, but we need to
do this as good policymakers, which means governing based
off of evidence, not off of vibes.

Speaker 2 (10:23):
Kevin Fraser, who thinks deep thoughts, is the AI Innovation
and Law fellow at the University of Texas School of Law.
He's got the Longhorn poster, or the flag rather, behind
him to prove it, and, oddly enough for a Texas guy,
a sign that says SKI AREA BOUNDARY. We'll have to
talk about that another time. Kevin Fraser, thanks for being here.

Speaker 1 (10:45):
Great.

Speaker 3 (10:45):
Yes, we'll definitely have you back. Looking forward to it,
Ross. Have a good one. You too.

The Ross Kaminsky Show