Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome to the Regulatory Transparency Project's Fourth Branch podcast series.
All expressions of opinion are those of the speaker.
Speaker 2 (00:18):
Hello, and welcome to the Federalist Society's Regulatory Transparency Project's
Fourth Branch Podcast. My name is Libby Dickinson and I
am the assistant director with the Regulatory Transparency Project. As
a reminder, all opinions expressed are those of the speakers and...
Speaker 3 (00:31):
Not of the Federalist Society.
Speaker 2 (00:33):
Today we are joined by a fantastic group of tech
and legal experts to discuss AI preemption. Our speakers today
include Dr. Scott Babwin Brennen, the director of the Center
on Technology Policy at NYU; Kevin Frazier, AI Innovation and
Law Fellow at the UT Austin School of Law; and last,
but certainly not least, Adam Thierer, senior fellow for Technology
and Innovation at the R Street Institute. Thank you all so
(00:56):
much for joining us today. I'll start by handing things
over to you, Kevin.
Speaker 3 (00:59):
Thanks so much, Libby. Always great to be chatting
about law and technology, especially on the Friday before Memorial
Day weekend. So before we get to burgers and camping
and tents and all that jazz, Adam, let's get into
the weeds of this really fascinating part of the AI
(01:20):
regulatory debate. So, just a few days ago, you testified
before the House Energy and Commerce Subcommittee on Commerce, Manufacturing,
and Trade, and at a high level, your testimony was
part of a broader debate about whether Congress should move
forward with a ten year moratorium on many forms of
state AI regulation. And yesterday, May twenty second, in
(01:43):
a vote of two hundred and fifteen to two hundred
and fourteen, the House approved the so called One Big
Beautiful Bill or HR one, which included that language. So
now, with the Memorial Day weekend before us, we
have that One Big Beautiful Bill, including that moratorium,
pending before the Senate. So let's just start with the
(02:06):
language itself. What does this moratorium actually call for? What
are we actually debating here?
Speaker 4 (02:12):
Yeah, sure. Thanks to FedSoc for putting on
this podcast, and to Kevin for hosting. So this moratorium
came about because Congress has grown concerned about the rise
of a problematic patchwork of state and local AI policies.
There's over one thousand AI related bills pending in the
(02:32):
United States today, and Congress has grown concerned about how
that might affect the free flow of interstate commerce in
an algorithmic sense. So what this provision does is it
basically sets up a moratorium on AI-specific state
(02:53):
and local regulatory enactments. It is an effort to basically
do what we did for the Internet with things
like the Internet Tax Freedom Act and other provisions of
the Telecom Act, but do it now in the AI era,
which is to set up a national marketplace to ensure
that investment and innovation can happen without excessive constraints in
(03:16):
a sort of parochial mother of all patchworks developing. And
this is of course controversial, as we'll discuss today, because
of course, the states are moving very aggressively. Compared to
the early days of the Internet. When I was working
on the Internet Tax Freedom Act back in nineteen ninety
seven ninety eight, the states were sort of very much
in the backseat. They were hardly in the car at all,
(03:38):
They just weren't acting. But today they're really driving. But
they're driving policy in a very very heavy handed way,
and in a way that's now become increasingly concerning because
of the cost and the confusing nature of all of
these different laws, regulations, and even definitions. A lot of
these state AI rules don't even define artificial intelligence
(04:00):
the same way. So that's what has motivated Congress to
move this provision.
Speaker 3 (04:07):
And Scott, thank you again for coming on. It's
great to get your perspective. I could see already, and, for
the listeners, we on the back end can see
one another, that, Scott, at times perhaps
you didn't quite agree with that characterization of what the
lay of the land looks like right now. So,
from your vantage point, what does this moratorium look like? What
(04:30):
kind of gave rise to it?
Speaker 4 (04:32):
Sure.
Speaker 5 (04:32):
Well, thanks again for having me on. I'm anticipating
a very interesting conversation here.
Speaker 3 (04:39):
Yeah.
Speaker 5 (04:40):
So, I mean, I think Adam's description of where it's
coming out of is fair, and I certainly wouldn't want to
quibble with Adam's understanding of what happens,
you know, in the back
rooms of the government. Well, there's a couple of points I guess
I'll make. You know, I am very sympathetic to the
(05:03):
concerns about a patchwork, and there's been a lot of
attention recently on the fact that, according to, I think
it was MultiState, there have been a
thousand bills introduced this year at
the state level. That seems like a huge number of bills,
and actually it is just a huge number of bills.
They range widely. I'll say, you know, there's a huge
(05:25):
difference though, between what's introduced and what is actually passed.
Now, that might be little consolation to Adam, or, Kevin,
to you, that, you know, the states have only passed,
you know, a dozen or a couple dozen bills. But
I just want to like make sure that we're like
all sort of starting at the same point, which is
we're not dealing with a thousand laws. We're dealing with
(05:46):
a couple dozen laws that for the most part, especially
what's passed this year, have actually been pretty targeted,
intended to address specific consumer harms. Now,
to my mind, I mean, I'm sure we'll get into
this that there's a couple different sort of debates to
(06:06):
have here. One is this sort of big picture debate
about the value of AI regulation writ large at
this moment in time. The second is maybe the slightly harder,
but they're both hard, but maybe the more interesting one
or most directly relevant, which is like, who should have
a seat at the table when we talk about regulation?
(06:27):
Should it just be left completely to the federal government,
as this moratorium more or less tries to do. Should
we include the states as well? What role should states play?
And what are the relative sort of trade offs involved
in these different sort of options of excluding the states
or allowing the states to play a role?
Speaker 3 (06:44):
And I want to dive into those meaty questions in
terms of thinking about the timing of regulation with respect
to emerging technology, thinking about the institutional capacity and the
constitutional role of different actors taking on different tasks. But
one thing I want to kind of flesh out a
little bit more is your point, Scott. Yes, there may
(07:05):
be a thousand proposals, and there's a vast difference between
a bunch of states thinking about EU AI Act-light or
EU AI Act-equivalent bills that are never going to see
the light of day. Let's say we have those, right,
we could have theoretically fifty bills that look just like
the EU AI Act that never get enacted. And yet we
(07:28):
have had some states, namely Colorado, move forward with comprehensive
AI legislation that, if emulated by just one or two others,
could theoretically begin to create the sort of regulatory framework
and patchwork that folks are concerned about. So, Adam, I'll
come back to you. Can you tell us a little
(07:48):
bit more about the interesting conversations going on in Colorado
right now? They passed this kind of first-in-the-nation,
leading comprehensive AI act. What's the debate, and what's
the status of that act right now?
Speaker 2 (08:03):
Yeah?
Speaker 4 (08:03):
Sure. So that's a major bill that passed last year
that Governor Jared Polis signed, and that bill is an
attempt to address concerns
about algorithmic discrimination, and it includes a wide variety of
definitions and standards. Basically, developers have to
make sure that when there are consequential decisions and substantial factors,
(08:26):
they must exercise reasonable care for high-risk applications, and
all of these things you could put air quotes around, because
basically it's left to the different states to define these,
and they do define these things differently. And then there's
different classifications for where developers versus deployers versus distributors fall
in the liability chain there or the regulatory system. That's
(08:47):
what creates a lot of confusion, so much so that
when Governor Polis signed the law, he issued
a statement that was astonishing. It read more
like a veto statement. And I want to
make sure I quote it correctly: he said the law that
he had passed quote would create a complex compliance regime
for all developers and deployers of AI and significant affirmative
(09:10):
reporting requirements. He then more astonishingly called upon Congress to
enact a quote needed cohesive federal approach. And then just
two weeks ago, he came out in favor of the
moratorium that Congress is considering, although he said he'd like
it to be a bit shorter. This is a big deal, obviously,
Colorado leading on this. It's been out there for a while.
(09:33):
There's been so much confusion that they actually had a
special study committee for it. They could not come up
with answers to a lot of these confusing questions about
how to define the things that I just mentioned. And
there's even been calls by some to say we need
a special session in Colorado to delay or consider the
law again. And laws like the Colorado law have been
pending in many, many, many states across the nation. A
(09:56):
lot of them have been slowed or vetoed, as was
the case in Virginia, and Texas, Connecticut, and other
states have revised their laws. But at the
end of the day, this gets back to the point
about the confusing, costly patchwork from hell. I mean, we're
talking about a lot of different types of approaches and
standards to something that we want to develop a national
(10:19):
marketplace in these technologies to deal with the fact that
we're in an international race with China and the rest
of the world for AI supremacy. So that's what's going
on in Colorado. But, Kevin, as you and I
pointed out in a piece for Lawfare just a
couple of weeks ago, it's not the only type of
law pending out there. It's just one variety of which
there are many different sub flavors. But then there's a
(10:41):
whole different set of classifications of AI regulations, like things
passing or moving in California last year or New York
this year that have to do with model level regulation.
And then there's very specific types of sectoral rules that
are pending or issue specific things for AI and fill
in the blank: AI and education, AI and elections,
AI and whatever. And so, you know, we're talking about
(11:04):
the most comprehensive sort of approach to regulating a new
emerging technology in the history of the United States. And
that really is begging for congressional intervention of some sort.
Speaker 3 (11:18):
And we know that there are a wide variety of
perspectives on both sides of this issue. And I'm not
even sure it's fair to say there are just two sides.
This is a smorgasbord of opinions and perspectives. So Scott,
I know we're unfairly placing you in the role of
representing that side, or perhaps the con in the question
(11:42):
of should we move forward with this moratorium or not.
So kudos to you for being a good sport. You
do not have to answer for everyone that
may not be in favor of this moratorium. But just
so we can flesh out some of the perspectives here,
do you agree with the general sentiment that, okay, if
we had fifty versions or even five versions of the
(12:04):
Colorado AI Act, that might be a problem. What I'm
getting at here is, where do you think the proper
role of the states is? Do you want to see
that sort of comprehensive model regulation that we may see
in Colorado or in New York where they're considering the
RAISE Act, for example, or are you contemplating a more targeted,
(12:25):
specific regulatory role for states.
Speaker 5 (12:28):
Yeah, so thanks for saying that. You're absolutely right, like,
I am certainly not the right person if you're looking
for a strong defender of the landscape of state AI regulation.
To be honest, the vast majority of bills that I'm
aware of I think are kind of dumb and quite misguided.
And you're absolutely right, like, I'm deeply worried about a patchwork.
(12:50):
I'm deeply worried about the impact that misguided regulation will
have on innovation. Or, it's more accurate to
say, the differential impact on innovation: I'm really quite concerned
about how certain regulations might disproportionately impact smaller companies rather
than bigger ones, worsening competition dynamics. So yeah,
(13:18):
all that being said, right, I'm not
the strongest defender here of states in AI. But
that being said, I hear a lot of arguments
that begin like this: AI is a once in a generation,
once in a thousand year technology that is very new,
(13:41):
so new that how can we really understand not only
what it will look like, but how we should regulate it.
That is an argument that is a starting place that
I see on all sides of the debate. But to
me it seems okay. If that is the case, then
why would we want to foreclose our ability to understand
(14:02):
different regulatory frameworks and to try them out? At core,
my position here is I want to see the states
involved because I think the best way to come to
a productive regulatory framework is to try things out in practice.
I've written a lot about the value of experimentation, about
things like regulatory sandboxes, and I think that's because regulation
(14:25):
is really hard and almost always has unintended consequences. We
can debate the pros and cons of a piece of regulation,
but until you try it out, it's really hard to
know what it's about. Given that, I think there is
a role for states to play in AI regulation, and
you're right, like, I don't know. I don't think states
(14:45):
should be doing things like SB 1047
from last year, right, significant regulation at the level
of foundation models, right?
Speaker 3 (14:55):
Things like Colorado?
Speaker 5 (14:57):
I don't know. I have some real concerns about that.
I think there is a role for states to be
trying out, testing out different sorts of regulations, especially when
the federal government has not done a whole lot, and
especially when there's a huge amount of concern across this
country that there are significant harms that need to be
addressed through regulation. I think there's a ton of debate
(15:18):
there right, and regulation needs to be very careful and thoughtful,
but it needs to do a good job of balancing
the different trade offs at play. But that doesn't mean
that there should be no state involvement there. I think
that's sort of where I end up.
Speaker 3 (15:35):
And so to stick with you for a minute, Scott,
as you're pointing out, this in many ways just comes
down to a line drawing exercise. Where do we draw
the line of where the proper role is for states
in the regulatory scheme versus the federal government in the
regulatory scheme, And there we run into a lot of
headaches because one state bill, for example, could hit on
(15:58):
various aspects of the AI stack, where it's regulating developers,
it's regulating deployers, it's regulating users. So a single bill
could raise a lot of issues, right, that then gets
passed and we say, well, where does that federal preemption
fall in? Which of these are preempted, which of these aren't.
(16:18):
So it'd be great to know. Is there a definitive
line you would draw where you would be comfortable with
a moratorium?
Speaker 5 (16:29):
That's a great question. I don't know. I've done a lot of soul searching
in the past week, you know, or two weeks, and
had a lot of these conversations. And I had a
good conversation with my co-author on the
WaPo editorial, Zeve, two days ago, and he, I
think convinced me that there would be some sense in
a moratorium on foundation models, on a regulation of foundation models.
That makes some sense to me, right, you know, I
(16:51):
mean, honestly... Well, again, we can have a big picture
conversation about, like, the value of AI regulation and how
AI should be regulated, but there also is this conversation about
this particular provision in this bill, right? If it weren't
ten years, right, which is an incredibly long time period. Adam,
you were talking about the Internet Tax Freedom Act. I think
you would know: when it was originally signed, wasn't
it three years? And then it was extended. Yeah, yeah,
(17:16):
so, you know, ten years is a really long time, especially
in a space where, like, we all know that the
federal government is, let's just say, very slow here. You know,
if it were a little bit more
targeted and specific... Right now, it covers AI models, AI systems,
(17:36):
which it defines as just about any system that uses,
I think, in whole or in part, AI, and anything
that you could call algorithmic, as I say.
Here, I'll get the exact language: artificial intelligence models, artificial
intelligence systems, or automated decision systems. That covers a
lot of ground would certainly apply to things like social
media platforms for all we know. It could apply to
the ability of state governments, for example, to make rules
(18:13):
about procurement, right? So, anyway, all
I mean to say is that there
are the big picture questions about, like, what is the
relationship between innovation and regulation? Who do we want to
be having a seat at the table? There's also
very fine grained ones about, like, does this provision,
you know, pass muster as far as, you know,
leading us to a good place in
regards to regulation?
regards to regulation.
Speaker 3 (18:39):
Yeah, Adam, let's hear your retort here.
Speaker 4 (18:42):
Yeah, I'd love to comment on this, because, look, these
are issues I struggled with for a long time. About
twenty eight years ago, I wrote a book for the
Heritage Foundation called The Delicate Balance: Federalism, Interstate Commerce, and
Technological Freedom, something like that, and it was
my effort to have an open, sort of soul searching
about where I draw these hard lines in the world
(19:02):
of emerging tech when it comes to constitutional first principles.
And I'm someone who's usually arguing for devolution and minimizing
federal power, but I also had to admit then and
now that we have to have a serious conversation about
what constitutes interstate commerce in the technological age that we're
living in. And you know, algorithms, like digital bits, they
(19:26):
don't stop at state borders, they cross them. And you know,
it's very important if you want to create a robust
national marketplace in this sector or others, you've got to
think hard about whether or not states should have the
powers to put up borders to stop bits or algorithms
from crossing state borders. Now, I understand that there are
(19:48):
real world outcomes that affect people in the States and
we need to address them. There are certainly legitimate concerns
about AI related harms, but again, this moratorium does not
affect generally applicable laws that would go after people who
in very specific contexts utilize algorithmic systems to do harm
(20:08):
to others, whether it's discrimination, civil rights, unfair and deceptive practices,
various other types of tortious harms. We have a big
toolkit to deal with these things, obviously admittedly in more
of an ex post than an ex ante way. That's
a really key part of this debate. Do we take
a European style ex ante precautionary approach up front and
(20:29):
have all of these so called guardrails by design. We
did not do that for the Internet, and some people
think that was a mistake and they say we can't
let allow happen to AI. What happened to the Internet?
And what I always ask them back, like, what do
you mean we should have had prior restints, we should
have had licensing of in the Internet and social media?
Is that what you wanted? I think we need to
be cautious about allowing those lines. If they're going to
(20:51):
be drawn to be drawn at the state and local
level in the form of hundreds of not thousands, of different, confusing,
costly approaches, because that does ham ravfications for the interstate marketplace,
which Article one, Section eight, Clause three, and many other
provisions of the Constitution and much case law has clearly
stated is in the realm of Congress to address. And
(21:12):
so I'm there, I'm there right now, and I'm saying, look,
we either need to have some sort of a national
framework with clear preemption, and then carving out where
states do have a role, or we need to have
a moratorium to put a pause on this sort of aggressive,
over zealous regulatory activity that honestly, most states should not
be engaged in.
Speaker 5 (21:30):
Yeah, I mean a couple of things.
Speaker 3 (21:33):
Yeah, if I can follow up with Adam briefly, because
one interesting response both to the article you and I
penned as well as to a piece I wrote for Reason,
was this really fervent pushback that states should be allowed
to adopt some voluntary compliance standards, that is, for example,
(21:56):
on model development, just saying hey, these are the best
practices we want you to follow. And something that I'd
love to get y'all's perspective on is: all right, let's
imagine California comes up with, I'm just gonna make up
something hypothetical, the Safe AI Model Certificate, right, that has
been verified by Stanford or run through the best scientists
(22:21):
in California, and now they will certify certain models as
California Safe. But it's all voluntary. There's no requirement that
you have to go and get this stamp of approval.
Is that a good thing? Is that a bad thing?
According to a lot of folks on X,
(22:41):
there should be no problems with that; it's totally up
to you. Scott, why don't we start with you? How
do you feel about this voluntary stamp of approval? Is
that going to raise any red flags for you from
a kind of innovation standpoint? Or is that somewhere where
you think states can and should play a role?
Speaker 5 (23:00):
Yeah, I sense from Adam chuckling darkly
as you're asking the question that there's going to be
a strong opinion here. But no, I mean, well,
first I'll say it seems like the moratorium is
written such that it would cover something like that, right? I mean,
(23:21):
unless you want to correct me on my
reading of the moratorium text. And I guess that's why
you're asking the question. But no, I mean, yeah, like,
I'm interested to hear what Adam has to say,
but, like, no, I
don't have a strong issue with that. And I think,
well, I hope we get later on
to focus more squarely on this question of the
relationship between innovation and regulation. I think that seems to
be sort of this key undercurrent in a lot of
the discussion around the moratorium, and I think AI or
(23:58):
tech regulation more broadly. Anyway, yeah, that's all I was saying.
Speaker 3 (24:04):
We will circle back there. But yeah, Adam, what's your
take?
Speaker 4 (24:07):
Look, I mean, when you talk about voluntary standards, you've got
to ask how voluntary is voluntary when you know the
state government in California, no less, is coming down hard
on you, saying like we strongly encourage you to voluntarily
do these things.
Speaker 5 (24:19):
Mm.
Speaker 4 (24:20):
I don't know. I'm wondering if that's legit. And yes,
I do think the moratorium might have something to do
with that, if it somehow took on some sort of
force of law, but we'd have to debate what we
meant by that in a quote unquote voluntary sense. But
realistically we don't need to talk about that. We just
need to talk about what's actually going to be required
by California and these other states, which has been pushed
(24:40):
and whether or not, you know, a moratorium should
cover it. And I really do think, obviously, there are
clear ramifications for interstate commerce. And you know, we're doing
this podcast just twenty four hours after there's a major
overturning of, like, EV requirements in California for the national
marketplace. And California, the so-called Sacramento effect, has driven
(25:00):
national policy on a number of fronts, and I don't
think that's good. I mean, you could have a better
debate about doing that in the context of labor or
environmental rules. But when we're talking about interstate algorithmic
commerce and the Internet, things like this, I think it's
a far more clear cut case that you absolutely have
to have a national overlay. It doesn't mean that you
can't have some carve outs from it, that you can't have
(25:21):
some exceptions that maybe Congress gets around to once they
craft a national framework. But the reality is that
we don't cede that ground to the states just by saying, like, oh, well,
it's laboratories of democracy, quote unquote; it's more like laboratories
of destruction of an interstate marketplace. And Congress is entirely
within its right to set some ground rules for that.
They did this in the Telecommunications Act to clean up
(25:43):
one hundred years of state and local laws that made
a nightmare for interstate commerce, innovation, and investment.
And we finally got around to correcting that. And I'm
glad we did not allow that to happen to the Internet,
to have fifty different state computer commissions or Internet commissions,
and I don't want fifty AI commissions. I think we
(26:03):
have to have a strong national framework here if we
want this to thrive the way that, I think,
most of us want it to.
Speaker 5 (26:10):
Yeah, and what do you say to people who say
a strong national framework would be great, it's just
not happening?
Speaker 4 (26:16):
I think that's true in the short term. But of course,
part of the argument for those who support the moratorium
is it'll sort of light a fire under Congress
to do more on this front. I understand that
people snicker at that and say, look at privacy, look
at these other things. It's true, but you know, there's
this argument that, like, Congress should not be taking away
any sort of rules of the road when they don't
have a clear replacement. I'm sorry, but that does happen.
(26:38):
It does happen, first of all, when any industry is deregulated.
Look at like when we deregulated airlines. We didn't say
we have to therefore have price controls now at the
federal level because we're disallowing at the state. No, we
just said, no, this is this marketplace is going to
be free, it's going to be open, it's going to
be deregulated now with AI. I'm not saying that we
do need some rules, but we need to think about
hard like about which one should be state and local
(27:01):
and which ones should be national, and some I think
should be clearly taken off the table, the easiest case
being model level regulation. But I think you can go
further and say that basically a lot of other laws,
including like Colorado's, would interfere with the interstate marketplace in
a pretty big way, as even the governor of Colorado
himself said in calling for a national framework. So I
(27:21):
do believe we need to have something like this in
the short term.
Speaker 5 (27:24):
Yeah, but that's not what this bill is,
right? Like, that's not what this provision is. This isn't
a scalpel, right like picking apart trying to figure out
what is the best sort of balance of state and
federal regulation. This is saying states should play no role whatsoever.
Speaker 4 (27:44):
Well, let's be clear. It is saying that if you
are going to do technocratic AI specific regulation, yes, that's true,
we're going to have a moratorium on that. If you're
doing again, let's read from the rule of construction, if
you're doing things that are requirements imposed under a generally
applicable law, imposed in the same manner on various systems,
(28:05):
you're in the clear. Now, we can have a debate
about what that means, but it clearly is saying that
there are laws that can apply, you just can't do
technocratic AI specific types of rules that would impact the
national marketplace. I do understand that's hard line drawing, but
I want to be clear it's not quite as blanket
as you're painting it out to be.
Speaker 3 (28:26):
So as we're moving to the close of this, I
think it is useful to zoom out to a higher level,
as Scott was indicating earlier, just trying to get
a sense of where the proper line is to draw
in a different context, just on this debate about the
pros of regulation and the cons of regulation with respect
(28:47):
to either furthering or inhibiting innovation. So, Scott, why don't
we start with you? What's the argument that regulation properly
crafted can actually accelerate innovation or what would that look
like in an AI context?
Speaker 5 (29:04):
Yeah, well, I think there's a few different arguments here.
One is to say, what is the relationship between regulation
and innovation? What is the relationship between regulation
and innovation for AI in particular? Frankly, we don't know,
right Like AI is really new. We have very little
empirical data on how different types of regulation may or
(29:26):
may not impact AI. That's
just how it works, right? Like, empirical analysis is
always going to be a
little behind the curve compared to other sectors. You know, it's
a bit of a mixed bag. I think there does
seem to be decent empirical evidence that in certain cases,
innovation can be significantly harmed by over regulation, especially in tech.
(29:55):
In other areas, in some areas of tech like green tech,
we see sort of the opposite happening, where certain regulations
can actually push forward certain types of innovation. I think
my sort of just like general takeaway here is that,
like it's complicated, and we don't have a lot of
great empirical literature on this really kind of complicated relationship
(30:17):
between regulation and innovation. But on the other side, I
think too often we over prioritize innovation without recognizing that
there are other competing values at play right out in
the world. Like sure, innovation is really important, and we
want to make sure that we're innovating in this
like exciting new area, but there are other values that
(30:40):
we also need to pay attention to. Safety, consumer safety, justice, competition, quality,
user experience. These are all things that we shouldn't sacrifice
entirely at the altar of innovation. And I think that's
where I've ended up here,
which is to say, we want to make sure we're
innovating in the right ways, but also it needs
(31:03):
to be in dialogue with these other
trade offs. And frankly, I'm okay if innovation is slightly impaired,
if we're talking about a potentially world ending technology. I'm
not quite sure I buy, like, the really sort
of existential level risks about AI, but certainly it
is a revolutionary one. And you know, there seems to
(31:25):
be wide concern not only amongst you know, folks working
in tech, amongst policy experts, but also amongst the sort
of general public, that they have a lot of
concerns about AI. And so I don't think
it's so obvious to say that not only
must we consider the impact that regulation has on innovation,
but we must also balance it with these other competing values.
Speaker 4 (31:47):
I just want to be clear that I agree that
we need to put into perspective the fact that innovation
isn't the only value. And I'm not saying that it isn't
an important value, innovation can of course save lives
and lead to great prosperity for civilization, but the reality
is that harms do happen and we need to have remedies.
And I'm not disagreeing with that. But what I'm saying
(32:09):
is that, first and foremost, we need to
make sure that we tap existing tools to address those harms,
tools that will be more technology neutral and not interfere with
the development of a national marketplace. And I just want
to quote someone here who was not exactly what you
would call a MAGA Republican. This is the very liberal
Massachusetts Attorney General stating last year that quote existing state
(32:30):
consumer protection, anti discrimination, and data security laws apply to
emerging technology, including AI systems, just as they would in
any other context. And I already went through the litany
of other types of rules and regulations that are applicable
to AI, general laws. I'm such a huge fan of
what Richard Epstein famously entitled one of
his books, Simple Rules for a Complex World: using generally
(32:53):
applicable law instead of technocratic, specific types of regulatory regimes.
So that's my first point. My second point would be, yes,
there will still be other concerns and harms, maybe
even some requiring preemptive approaches, but I still believe this is
probably better focused at the federal level with a national
framework and then maybe Congress working with the states to
(33:14):
figure out where their proper role is in other contexts.
I understand the moratorium preempts a lot of that, but
I think that's necessary. And then we get to the
discussion about how to get into the detailed weeds of
how to deal with those things.
Speaker 3 (33:29):
Just to exercise the moderator's prerogative for a second, because
I think there's another factor that doesn't get discussed enough
in these debates, which is institutional capacity. There's an assumption
that states can enact these laws and tomorrow suddenly there
are one hundred and fifty AI experts or AI auditors
(33:49):
who can just relocate to New York and suddenly enforce
the RAISE Act, or tomorrow Boise, Idaho is going to
be a hub of AI expertise. I just don't see it.
And so this institutional capacity question that doesn't get asked
enough is, yes, you can have great regulation. It can
be well crafted, but it can be horribly horribly enforced
(34:12):
in a way that does hinder not only innovation but
also the values we're talking about. As we've seen
from the EU AI Act, when you don't have proper enforcement
regimes set up that are well funded and include experts,
what ends up happening? You get kind of arbitrary
enforcement where enforcers are just going for whoever looks like
(34:34):
the most egregious actor because they're in the headlines, or
what have you. But they're not actually enforcing the law
in a way that would align with the rule of
law itself. So, an important consideration there. Scott, let's have you
respond and then we'll quickly go to quick forecasts. What's
the Senate going to do? So Scott, go for it.
Speaker 2 (34:55):
Yeah.
Speaker 5 (34:56):
Well, I'd be remiss if I didn't use this as
a plug for a piece that, Kevin, you and I
are writing together. So we're actually trying to address
this question of how exactly existing UDAP laws,
or maybe, we'll see how the piece develops, wider consumer
protection laws at the state level, how do they cover
the sort of AI harms that people are concerned about?
(35:16):
And I certainly don't want to, you know, get
ahead of ourselves here, but I would say,
for me at least, it's not super clear: a
lot of these laws have some ambiguity, certainly on the
unfair, you know, side of things. Right, the deception
(35:39):
side is maybe a little bit easier, in that
it's sort of clear what might be
covered on the deception side. On the unfair side of things,
there might be sort of more room for, Kevin,
as you said, sort of selective prosecution here. So on
that side, I actually think that additional regulation may
(35:59):
help clarify the situation rather than make it more difficult.
But we'll have to sort of see how that one
plays out. But in the bigger sense of, like,
do existing state laws cover the harms that people
are concerned about? Even aside from that,
right, there are a number of specific harms
(36:23):
that are completely outside the scope of existing consumer protection
or, like, UDAP laws. So, things like, you know, disclosure laws, right:
there's no reason to think that existing consumer protection laws
would require disclosure when, you know, a company, for example an
(36:44):
insurance company uses an AI model to make a determination
in a medical insurance case. That is something that a
lot of people are deeply concerned about, you know. Or
algorithmic discrimination, if it doesn't already cover protected classes. Right,
there's this problem I think we run into where the
(37:04):
phrase algorithmic discrimination, like people associate that increasingly narrowly with
like discrimination against protected classes. That is something, of course
we need to be concerned about. But we do have
laws that apply in those cases. But there's this bigger
literature on algorithmic discrimination that says that often algorithms or
(37:25):
AI systems may be discriminating in ways that we wouldn't anticipate,
or against groups that we don't even recognize as groups, right,
so maybe it's discriminating based on zip code or on
hair color. Those things would not necessarily be covered right
under existing sort of equal protection laws, but are deeply concerning.
(37:47):
So I think it's not exactly clear where, again,
the boundaries are about, like, what existing state law covers and
what it doesn't.
Speaker 3 (37:58):
So obviously we still have so much to discuss, but
we're running into the Friday of a holiday weekend, so
we're going to have to wrap it up here. Adam,
any final word, a prediction about what the Senate will
do, or one thing you'll be keeping an eye on
going forward?
Speaker 4 (38:13):
Well, a lot of people fear that this is going
to get tripped up on parliamentary requirements, basically saying that
it has to be germane to a budget bill. This
is all part of a reconciliation kind of package and everything,
So it's potentially going to get tripped up before we
even get to a substantive debate in the Senate
about this moratorium. But the moratorium could come back,
and Senator Cruz has said that he potentially would incorporate
it into a broader AI bill. He's considering a so
(38:35):
called AI sandbox bill. We haven't seen that language yet,
but it's entirely possible that sometime in midsummer we have
a debate about this, or come back around to that idea.
Speaker 3 (38:43):
Excellent. And Scott, any final word?
Speaker 5 (38:48):
No, I mean, I think Adam probably has a much
better sense of what the future might hold here. I'll
just say, I mean, I'm looking forward to, you know,
Senator Cruz's sort of own bill. To me, that seems
like a much more sort of appropriate venue than being kind
of stuck within this massive sort of budget bill, so that
we at least have the time to actually have a
(39:10):
real debate about this provision. And if Cruz sort of
attaches the moratorium to an actual framework, maybe that'd be good,
right? If we can preempt state law
with something, that's better than preempting it with nothing at all.
So I'm very curious to see sort of where this
goes in the next few months.
Speaker 3 (39:29):
Well, folks, you can definitely expect another pod on this
topic down the road. But for now, thanks to Adam,
thanks to Scott. And Libby, thank you again for hosting us.
Speaker 2 (39:37):
Absolutely. Thank you all so much for joining us
today and for sharing your insights. That was a fantastic
discussion and a little bit of a debate there. It's
great to listen in. Thank you to our audience for
tuning in today, and if you're interested in learning more
about all of our programming here at RTP discussing the
regulatory state and the American way of life, please visit
our website at regproject dot org. That is reg project
(40:01):
dot org.
Speaker 5 (40:02):
Thank you.
Speaker 1 (40:09):
On behalf of the Federalist Society's Regulatory Transparency Project, thanks
for tuning in to the Fourth Branch podcast. To catch
every new episode when it's released, you can subscribe on
Apple Podcasts, Google Play, and Spreaker. For the latest from RTP,
please visit our website at regproject dot org. That's
regproject dot org.
Speaker 3 (40:37):
This has been a FedSoc audio production.