Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome to the Regulatory Transparency Project's Fourth Branch podcast series.
All expressions of opinion are those of the speaker.
Speaker 2 (00:19):
Hello, and welcome to the Federalist Society's Regulatory Transparency Projects
Fourth Branch Podcast. My name is Elizabeth Dickinson and I'm
an assistant director with the Regulatory Transparency Project. As a reminder,
all opinions expressed are those of the speakers and not
of the Federalist Society. Today we will be discussing various
state legislative efforts on AI. We're excited to have with
(00:40):
us a panel of legal and technology experts. First off,
we have Kevin Frazier, an AI innovation and law fellow at
Speaker 3 (00:48):
The UT Austin School of Law.
Speaker 2 (00:50):
We have Sunny Gandhi, the vice president of Political Affairs
at Encode. And last, but certainly not least, Dean Ball,
research fellow at the Mercatus Center at George Mason University.
Speaker 3 (01:01):
Thank you all so much for joining us today. I'll
hand it off to you Kevin.
Speaker 4 (01:04):
Libby, thank you so much for the kind introduction, and
we'll go ahead and start diving into the weeds.
Speaker 5 (01:11):
As the federal
Speaker 4 (01:12):
Government nears a potential shutdown, now's as good a time
as any to turn our AI policy focus to the states.
If states are a laboratory for democracy, then they are
most certainly a mega factory for AI regulatory proposals. In
the first quarter of twenty twenty five, more than seven
hundred AI-related bills have been introduced across state legislatures,
(01:35):
many drafted by politicians who may struggle to distinguish between
a neural network and a social network. Today we have
two experts to help us get a sense of whether
the regulatory fervor at the state level aligns with America's
AI goals or threatens to undermine its current status as
a global leader. My hunch is that our guests may
(01:56):
not always see eye to eye, but that's precisely the
point of this podcast, making sure folks have a chance
to explore policy nuances rather than political platitudes. So big
thanks to Sunny and Dean for joining us. Dean, let's
start with you. There's a danger in policy conversations to
assume that more law is the answer. Karl Llewellyn pointed
(02:17):
out long ago that we're trained in law school to
just think law, law, law, and if we don't see
enough law, just add more laws. So can you give
us an overview, Dean, of why state legislatures feel as
though there's a lacuna in regulation when it comes to
governing AI. What are the threats they're concerned about that
aren't already covered by existing law?
Speaker 6 (02:40):
Yeah, thank you for having me.
And you're certainly right that sort of adding more law
incrementally over the years is the history of America, and
we don't tend to go back and revise those laws.
So one symptom of that is that I'm not sure
how many policymakers in fact know what the law is.
(03:04):
It's non-trivial to find. I'm always amused, you know,
OpenAI and their instructions to their model say follow
the law. Like, okay, well, I guess you really are
planning to build super intelligence soon because I don't know
what that means. But I think you know, there are
a couple different motivating factors here. One of them is
(03:28):
this intuition that I think you hear on both the
left and the right that we don't want another social
media or another Internet. There's a general perception, and I
think this perception is wrong. But the perception persists, nonetheless,
that policymakers somehow missed the boat on quote unquote regulating
(03:50):
the Internet, and they are implicitly positing there that there
is some sort of law one could have passed that
would be constitutional in the United States of America that
would make it such that social media didn't do the
various bad
Speaker 3 (04:04):
Things that happened.
Speaker 6 (04:05):
And of course the left and the right disagree on
what the bad things are, but everybody agrees that there
are bad things, at least in the policy making world.
So I think there's that. Number two, there's legitimate issues.
There are things like deep fakes, where in certain cases,
in fact, depending on the state, malicious
(04:26):
uses of deep fakes are maybe actually not uniformly covered
by existing law. And then there's kind of also this
pre-existing AI policy conversation that originated really in the
European Union. You know, it's worth remembering that the EU's
AI Act started years before ChatGPT. Those conversations and
(04:48):
the concerns at that time were about things like algorithmic
bias, sort of AI that could violate what the Europeans
call fundamental rights and what we call civil rights, things
like facial recognition algorithms, things of that kind.
(05:10):
Those concerns have kind of continued to manifest themselves in policy,
and then ChatGPT sort of came on the scene
and was an inconvenient innovation for this policy world because
in many ways I think those frameworks apply problematically. So
I think it's like those three things put together. And
then also the general, I think, political valence that is
(05:32):
again bipartisan of not particularly trusting the technology industry and
big tech especially.
Speaker 7 (05:39):
I think it's really telling, Dean, as you pointed out,
that when OpenAI says follow the law, there's
a big set of questions that arise: well, what
laws, in what context, for what reasons, and who's enforcing it?
Speaker 4 (05:52):
We've seen state attorneys general across the country have to,
for example, provide guidance about how they think the law
may apply to AI. So it's important from the outset to
just acknowledge that following the law isn't always as easy
as we expect, but that doesn't mean that the law
Speaker 5 (06:10):
Doesn't apply in those contexts.
Speaker 4 (06:12):
And Dean, you also did a nice job teeing up
what I'll attribute personally to Adam Thierer, this idea of
there being multiple buckets of AI policy. Right, there's model
level AI policy, there's conduct or discrimination or bias focused policy,
and then there's kind of sector-specific policy. Sunny, you
(06:34):
played an active role in advocating for California's SB ten
forty seven, a bill that was ultimately vetoed by Governor
Newsom that was arguably the poster child for a major
AI regulatory initiative.
Speaker 5 (06:48):
So what's your
Speaker 4 (06:48):
Pitch for states being leaders when it comes to not
only addressing some of these discrimination concerns or bias concerns,
but even more comprehensive pieces of AI legislation akin to
what we may be seeing in the EU right now.
Speaker 8 (07:04):
Yeah, and first of all, thank you so much for
having me, really excited to be here. I was actually
going to jump in if you hadn't asked a question
like this, because I think this is one other thing
that Dean didn't cover in
Speaker 3 (07:16):
Why states are so excited to work on this.
Speaker 8 (07:19):
I think there is a really strong sense that Congress
generally doesn't do anything, and they haven't done anything on
tech policy for years. The most significant piece of tech
legislation that's been passed in the last couple of years
was the TikTok divestment, which I don't think people put
in the same bucket as the kind of social media
(07:42):
regulation that Dean is talking about.
Speaker 3 (07:44):
That everyone is very upset that they missed the boat
on, or kind of kid safety
Speaker 8 (07:47):
Proposals or things like that, and so I think there's
a real sense in the states that, man, it has
been decades since Congress has done something about this.
Speaker 3 (07:58):
California notably had their
Speaker 8 (08:02):
Data privacy bill and they were being told that, no,
the federal government is going to take care of this,
We're going to pass a bill. It's twenty twenty five.
We still don't have a federal data privacy law on
the books. There are many such instances of where the
states are just frustrated that the federal government has dropped
the ball on doing anything on these issues. Now, you
can agree or disagree on whether or not you think
(08:25):
the government should have done anything, whether Congress should have
passed any laws, but I think the sense that we
want movement to happen, and it is not happening at
the federal level, and so it is our responsibility to
now do something about this is very much there, and
very much the thing that a lot of people are
feeling in states. I think California again is a special
example where people feel this even more, where I think
(08:48):
the California government is really excited to take on large,
major regulatory initiatives or projects in a way that other
states might not be willing to. I think that really
kind of should frame where this energy is coming from.
People think that the federal government is broken, that the
congressional process is too gridlocked and won't move, and
(09:11):
so now they're looking to states to figure out, Okay,
you still get things done, and you still are willing
to work and try to pass laws. So what can
we do in the context of your jurisdictions and your
ability to legislate?
Speaker 4 (09:27):
And so, just to double down on this idea of
states perhaps leading on the governance of these leading AI models,
focusing on frontier models, the most sophisticated and developed AI tools.
There's a concern that's commonly raised if we look at,
for example, the California Privacy Protection Agency, which was stood
up in response to the CCPA, it took quite a
(09:49):
while to get off the ground. Arguably it's not having
the greatest success in hiring rapidly, and there are probably
not too many AI experts right now raising their hand
to, for example, go work in Sacramento at a lower salary. Right,
there are other places they could work, they can earn
higher salaries. How do you respond to the critique that,
(10:10):
let's just assume for a second states should be regulating
these frontier models.
Speaker 5 (10:14):
What about the capacity issue?
Speaker 4 (10:16):
Do states have the capacity to lead things like audits
into frontier models, to oversee these licensing regimes that may
be proposed for governing the most sophisticated models.
Speaker 3 (10:28):
Yeah, so this is
Speaker 8 (10:29):
Something that I have definitely thought a lot about, and
I do think that there are differences here of the
scope of the kind of things that states should be
attacking versus the federal government. So I don't really think
that it is a good world if
every state tries to create its own regulatory agency for AI,
(10:50):
that it would probably be really bad to have fifty
different state regulatory agencies that issue their own kind of
guidance and have their own rule making authority. You end
up with this quite insane patchwork. I think one of
the things in SB ten forty seven that we
had was, originally, we started off with that idea. We
wanted to create a frontier model agency, and that
(11:12):
would kind of like be the hub to do a
lot of this activity. We ended up paring that down
because of a lot of the concerns that I just raised,
where staffing is really difficult, getting an agency up and
running is really difficult, talent acquisition is hard, it's
questionable if they would be making the right decisions, and
so I think, you know, some of the things
that you'd want states to do is more to clarify existing laws,
(11:36):
create pathways for people to report violations of laws, do
things that are more light touch, that
make it easier for regulation down the line
Speaker 3 (11:48):
To maybe happen.
Speaker 8 (11:49):
But for now, I think it is not great for
states to start creating their own. Like, I don't think
it's amazing if California and you know, New York, and
then Texas and Florida and Montana have their own agencies
and they're all issuing guidance that maybe relates a little bit,
but is separate, I think that ends up being really
(12:10):
bad for innovation.
Speaker 4 (12:13):
Yeah, fortunately, there are a ton of AI state legislation
trackers out there, and you can try to just get
a sense of the disparate definitions, for example, of just
the term artificial intelligence.
Speaker 5 (12:23):
There's not even agreement there.
Speaker 4 (12:25):
And so if we see the sort of patchwork that's
characterized the privacy sector and privacy law generally manifest in
the AI context, then that can raise a whole lot
of compliance concerns.
Speaker 5 (12:37):
So Dean, I
Speaker 4 (12:38):
Wonder if we're talking about light touch regulations at this stage.
Arguably the lightest touch would just be self-governance. We
have these labs like OpenAI and Anthropic, every time they're
releasing a model now, they're releasing a thorough analysis of
what risks they think the model poses. They're being increasingly transparent.
(12:59):
Why is there an urge to say, if
Congress isn't going to regulate these models, we have to
have states do it, when it appears as though these
labs are trying to be courteous, are trying to be
responsible actors in this space?
Speaker 3 (13:14):
Well, it depends on the law you're talking about.
Speaker 6 (13:16):
I would say it depends on what the law, you know,
the state law, that
Speaker 3 (13:20):
You know, what it's trying to fix.
Speaker 6 (13:22):
So, for example, there are about a dozen algorithmic discrimination
laws throughout the states. One of them has passed in
Colorado, and then there are about a dozen more
that are kind of moving through different states, some
very large states. It's bipartisan. In fact, the Colorado law
(13:45):
is not in effect yet. And these are very very
very complex laws that mirror the European Union's AI
Speaker 3 (13:53):
Act and are fully general.
Speaker 6 (13:55):
So everything from, you know, the way AI is defined
covers a lot of very very simplistic things that have
been around for decades, and they also cover you know,
super intelligent AI systems of the future. And they put
an enormous compliance burden on both developers and commercial end users.
(14:17):
And my general sense is that every lawmaker who has
introduced these bills, or almost every one of them, doesn't
really actually understand what they're doing. They think that they
have a proposal that's been vetted by a lot of
other people because other states are doing it.
Speaker 3 (14:32):
The states tend to be quite mimetic. They tend to
copy one another, and.
Speaker 6 (14:37):
Then they introduce these laws and they hear feedback from
you know, tech companies or businesses in their state, like
the Chamber of Commerce, and they realize that they've really
stepped in it. Just to be quite candid, I think
Jared Polis is kicking himself about the fact that SB
two oh five is law in Colorado. Under the version of
this that passed, they had to create a task force to study
(14:59):
how to implement the law, and after a year of meeting,
the task force could come to no agreement about how
the law was supposed to be implemented. We'll see what
happens in the rest of this session.
Speaker 3 (15:09):
So that's the law.
Speaker 6 (15:10):
That's kind of like, truthfully, I think the reason that
law exists is because some people wanted
a law to exist, and then some other people said,
here's a law, this is something, go do this something.
And actually, I'm not sure how many
people are genuinely committed to that framework. But then
I want to separate that from a bill like SB
(15:32):
ten forty seven and Senator Wiener, who I think was
reacting to genuine
Speaker 3 (15:37):
Problems that he thinks there are.
Speaker 6 (15:41):
In the current system, and I think in particular SB
ten forty seven was a bill about tail risks. It
was a bill about damages that could occur from AI
models that are not currently on the market, that were
not at the time of SB ten forty seven's discussion
and probably still aren't today, that would be you know,
(16:02):
in excess of, what was it, one hundred million or
five hundred million, I forget, one hundred
million dollars of damage to property or lives lost.
That's a very different kind of bill. And I think people
Speaker 3 (16:23):
Can argue about whether or not those kinds of catastrophic
Speaker 6 (16:25):
risks are things we should be taking seriously or not.
But certainly you can understand how the current system of
market incentives might not incentivize frontier lab companies
to mitigate tail risks effectively.
Speaker 3 (16:42):
That is often a problem with market systems.
Speaker 6 (16:46):
So I think that's an entirely
legitimate issue to have, at the very least. So I
think basically the motivating factors really depend
on the laws. And then, you know, there's
also things like deep fakes that are motivated often by
headline controversies. Policy makers will see a deep fake of
another policy maker and get scared. I think there's a
(17:08):
there's a very significant extent to which policymaker interest in
deep fakes is motivated by a self-protective instinct because
they themselves do not want to have that happen to them,
they are public figures. And then also you know there
are genuinely terrible
Speaker 3 (17:25):
Things that are happening with deep fakes.
Speaker 6 (17:27):
Right, there's malicious sexual deep fakes going around in high schools, right,
I mean things like this that are probably socially destabilizing
at the margin.
Speaker 3 (17:37):
So that would be another
motivating factor.
Speaker 6 (17:43):
But I think it's like it's about as diverse as
AI itself.
Speaker 4 (17:48):
And Sunny, I mentioned that there are more than seven
hundred bills out there. We've talked about kind of like
a Baskin-Robbins of AI policy, thirty-one different flavors
of legislation going on here. What are the odds, in
your opinion, of any state passing something akin to
SB ten forty seven.
Speaker 5 (18:07):
Do we see any states where.
Speaker 4 (18:08):
There's been meaningful progress on a sort of comprehensive AI
bill that would perhaps have the sort of Sacramento effect
that people were fearful of under
Speaker 5 (18:19):
SB ten forty seven.
Speaker 8 (18:21):
So I do think California is definitely unique here, both
because it's where a lot of the major AI labs
are based.
Speaker 3 (18:32):
It's also the fifth largest economy on earth.
Speaker 8 (18:36):
It has historically created the Sacramento effect, and so
I think people look to it more than other states
to kind of set the tone for a lot of
other national policy. And so I do think there is
a uniqueness about California that exists and that should be tracked.
I think that again, there are a lot of people
(18:57):
that are interested in doing the like algorithmic impact assessment
bills that Dean is talking about. I know that's his tune.
I think I may be a little more sympathetic.
I think they are well intentioned, but I do think
that they are not very clear what their goals are.
I do think this is an example where the kind
(19:17):
of tech lobby that exists in a lot of these
states has been very useful to kind of then educate
lawmakers, or, even if the bills make it
out of the legislature, get them vetoed at the governor level.
Speaker 3 (19:29):
And things like that.
Speaker 8 (19:30):
And so I am hopeful that those kind of like
extremely broad, over-encompassing AI Act style bills will not make
it through. And what you will see is more of
the tailored focuses and more of the extreme risk focus
that wouldn't be covered, like an SB ten forty seven,
or maybe parts of it. Like I think you know,
(19:51):
SB ten forty seven was about three different things.
Mainly, one of them was this public computing framework
that would give money for California to set
up its own compute system that startups or academics can use.
I don't think that part was controversial at all, and
I think a lot of people were excited about that.
Speaker 3 (20:12):
There was a second
Speaker 8 (20:13):
Part, which was whistleblower protections, which allowed employees of
AI companies to report dangerous activities that they thought their
companies were engaging in privately to the Attorney General. And
then the last part was I think what was the
most controversial, which was this idea of what is the
(20:34):
liability of developers
Speaker 3 (20:35):
As it comes to these systems?
Speaker 8 (20:37):
And so I think you could break up any one
of these three things into its own bill and that
would still be a pretty big deal if each one
of those passed. And I do feel optimistic
that you will see one of those components pass in
at least one state. I'm hopeful, and we are working
on a couple of states where different components of this
Speaker 3 (20:59):
Will hopefully make it through.
Speaker 8 (21:00):
But I think there is definitely room for these more
scoped provisions, and not a massive AI Act style bill, to
make it through.
Speaker 4 (21:10):
And Dean, in this odd kind of regulatory world right now,
we have just a whole slew of state legislative proposals. Meanwhile,
at Congress and on the Hill and at the White House,
there seems to be a little bit of confusion right
now about what direction to head in with respect to
AI policy. We are awaiting the AI Action Plan that
(21:35):
was mandated by President Trump's executive order in the early
days of his administration that will likely provide a lot
more clarity. We know from Vice President Vance's speech in
Paris that the administration really wants to prioritize what it
refers to as quote AI opportunities.
Speaker 5 (21:52):
Over AI safety, and Congress right now may be on
Speaker 4 (21:57):
The verge of passing what's known as the Take It
Down Act, which is again focused on deep fakes. But Dean,
you've written extensively on the proper role of Congress in
this federal system when we're dealing with something like AI.
If you were advising a high ranking member on an
(22:17):
influential Senate committee, let's say, what's your pitch to them
for what they should do in the near term when
it comes to governing AI.
Speaker 6 (22:26):
I think the first thing I would do is try
to push back on the notion that what
we need here ever, not like in the next year,
but ever, is a comprehensive AI bill.
Speaker 3 (22:46):
I think that's a fool's errand.
Speaker 6 (22:49):
And I think the fundamental challenge of making a comprehensive
AI bill is that it's a general purpose technology, and
I think it's fundamentally a low IQ endeavor. Like
it's just if you look at what the Europeans have done,
it's mass confusion.
Speaker 3 (23:05):
There are a lot of countries
in Europe that are probably just going to pretend it
doesn't exist, or do everything they can to pretend it
doesn't exist, when they get down to implementation. So that
would be number one.
Speaker 6 (23:17):
Number two, I would say that you
want to make targeted, prudent interventions. So I think something
like the Take It Down Act is perfectly reasonable. Right, that's
basically just creating a pathway for legal action
(23:40):
on the knowing distribution of malicious deep fakes.
Speaker 3 (23:44):
It's fine.
Speaker 6 (23:48):
The AI Safety Institute would be another. You know, the AI
Safety Institute is tasked with testing frontier models for catastrophic risk.
They're not in charge of misinformation policing, they're not in
charge of algorithmic bias. They test models for their ability
to create very very dangerous, socially disruptive capabilities, which I
(24:13):
think are a threat to national security. And so absolutely
I mean, there's no point in having a
government if there's a technology that could create the ability,
that could democratize the ability to do autonomous cyber attacks.
There's no point in having a government if the government's
not on top of that risk. I'd rather just not
have a federal government in that case. So obviously the
(24:35):
federal government should be doing that. So the AI Safety
Institute should be funded and authorized. That would be another one.
And then, more broadly than that, the closest I think
you could get to a comprehensive bill would be something
on liability, but that's kind of a whole separate can
(24:58):
of worms, and I think probably is like not going
to happen in this legislative session at all. The last
thing I would say is thinking very seriously about ensuring
that the federal government, primarily the executive branch, but also
Congress and also the courts can adopt AI as aggressively
(25:19):
as possible. And again I don't mean necessarily the language
models that we have on the market today. But
if you think a little bit forward, the systems that
we are likely to have in the near future, which,
to be clear, I think will basically be operating at
an expert level in all cognitive domains and able to
(25:40):
use a computer to do anything a human can
with a reliability in the high ninety percent range. That's
a technology that the government needs to adopt. There's all
kinds of laws, some of
which were well intentioned, that make that adoption harder, and
(26:00):
so there's all kinds of little, tiny tweaks I think
you can make, making sure the government has the procurement
ability to purchase these things. It would all be things
like that that are incremental steps. I think the idea
of comprehensive legislation for something that is, A, a general
purpose technology and, B, whose capabilities are not entirely understood
(26:23):
right now is just simply a fool's errand.
Speaker 5 (26:27):
Sunny.
Speaker 4 (26:28):
I think a lot of this boils down to the
fact that we perhaps don't have as much public knowledge
about the ins and outs of AI as would be
helpful for a meaningful discourse on these topics. I mean,
folks are seeing podcasts predicting the end of the world,
or they're seeing podcasts that are saying, you'll never have
(26:49):
to work another day in your life, right, You'll just
have an AI agent do everything for you. And so
everything's either dystopian or utopian to the nth degree. So
how does the public play a role in animating this
state legislative fever for AI regulation?
Speaker 5 (27:08):
Is it the public that's really pushing a lot of
these bills?
Speaker 4 (27:12):
Is it legislators trying to get that press release that says,
you know, leader in state legislature Y passes major AI bill?
Speaker 5 (27:22):
What's the pressure?
Speaker 4 (27:23):
What's the animus that's actually driving seven hundred bills coming
up in state capitals.
Speaker 8 (27:29):
So I'm going to go back to what we were saying earlier,
which is definitely that a large portion of these bills
are being driven by I think policy makers that don't
really know what they are doing and are kind of
uninformed about how the technology actually works.
Speaker 3 (27:45):
So there's a couple of things here.
Speaker 8 (27:46):
One is that I think there is a problem of
both the public and also policy makers where they are
over indexed on the November twenty twenty two version of ChatGPT,
and that's sort of where they are stuck, that this
is where AI systems are at and this is what
they can do. They used it, you know, it came out.
It was a big deal. You had like one hundred
(28:07):
million users sign up. Everyone used it a couple of times.
If you were a student, you were really excited about cheating.
If you were anything beyond that, you
were like, Okay, this is a cool gimmick.
Speaker 3 (28:19):
I don't know how I can use this.
Speaker 8 (28:21):
You maybe tried to put in a work question and
you were like, oh, this sucks, I can't do anything
with it, and then you forgot about the system.
Speaker 3 (28:27):
And I think there is a real problem there.
Speaker 8 (28:29):
Where you can read about it, and you can, you know,
read headlines and see what people are saying. But to
really understand how these systems work and are improving, you
just have to start using them pretty actively and try
to understand where their limitations are.
Speaker 3 (28:46):
How they're getting better.
Speaker 5 (28:47):
You know, I like, I
Speaker 8 (28:49):
Can see how Claude three point seven Sonnet, which
just came out two weeks ago, is meaningfully better than
Claude three point five Sonnet (new), which was the previous version.
And I can see, like, where the capabilities got better
and kind of how they've changed the system
Speaker 5 (29:03):
A little bit.
Speaker 8 (29:04):
And I think that's really important because
I think there's just a general sense of there are
AI systems and we don't exactly know how they operate,
and some people are very over indexed on the November
twenty twenty two version. Some people just kind of read
the headlines and are like, oh, man, like everyone's going
to lose their job next week and stuff like that,
(29:24):
and so that is.
Speaker 3 (29:26):
One big part of it.
Speaker 8 (29:27):
I think another big part that is really unfortunate and
something I think we are going to have to
Speaker 3 (29:32):
Deal with is that there is just a very negative public
Speaker 8 (29:36):
Sentiment on AI generally, and I think this is honestly
really bad and it is going to boil up into
a bigger problem in the not so distant future where
if you just do big surveys of people who are
not plugged into this, who are not you know, programmers
that are trying to use Claude to help them write
(29:56):
code faster and things like that. People just are, one,
really worried about what it means for them in the future.
But also, Sam Altman recently posted
Speaker 9 (30:06):
On X kind of a story where they, like,
fine-tuned a recent model that wrote a very creative story,
And if you look at a lot of the feedback,
it's kind of people quote tweeting it and being like, man,
why would I ever read something written
Speaker 8 (30:25):
By a machine? I'm never going to do this. This is horrible.
You know, if you go on, like, Bluesky, you
will start seeing even more people that are kind of like,
we need to shut this all down.
Speaker 3 (30:35):
This is horrible.
Speaker 8 (30:37):
Again, some of this is well meaning and motivated, Like
I think, you know, Dean was talking about the Take
It Down Act, which focuses on non consensual intimate imagery
that is then deep faked, and that's something that like
me and my organization have worked on a lot, and
I think you know, there are people who see that
kind of stuff and they're like, okay, this happened and it
Speaker 3 (30:55):
Was really bad.
Speaker 8 (30:57):
I'm now worried about all the other things that could
happen and could also be really bad and so I
think there is a constituency there of
people that just want their legislators to do something, anything,
about this thing that they feel very uneasy about. And
so that's where I think you get a lot of
this action again. Now, then there's a separate
(31:18):
thing where I think legislators like Senator Scott Wiener in
California are worried about, you know, more extreme risks that
are not really the ones that are in the public discourse,
and that there are. This is like proper policymaking, where
you are thinking about things that might not already be covered,
that other people aren't talking about, that should be discussed more.
(31:41):
And so you know, I want to like lay out
those different proposals and kind of like show that,
for a lot of the AI Act style stuff
Speaker 3 (31:49):
Or where you get like, you know, no
Speaker 8 (31:52):
Deep fakes at all and stuff like that, there is
a constituency and that is motivating. I do think for
some of the other issues, it then gets a little
questionable and people aren't tracking it as well.
Speaker 4 (32:02):
Just one quick, really bad dad joke
that I try to tell whenever people are asking about
the lack of AI literacy. I say, oh, are you
using Gemini? And the answer is, oh, no, sorry, I'm
a Libra.
Speaker 5 (32:17):
Uh, it's my bad joke, I know, but some people
will appreciate it.
Speaker 4 (32:22):
You can attribute it to me in your next stand
up act. You wanted to jump in?
Speaker 6 (32:27):
Yeah, no, I would maybe even turn
up the volume a little bit on what Sunny said,
in the sense that you know, I increasingly do think
that the possibility of labor market disruption in the relative
near term is fairly high. I
(32:51):
don't know that what that's going to look like is
companies mass firing people. I don't know that it's going
to look like that. But what I think we're instead going
to see is building labor market problems by attrition. So
as people retire or leave the company for
their own reasons, companies will just not replace them.
Speaker 3 (33:10):
They won't hire junior staff in particular.
Speaker 6 (33:13):
I think what we're seeing right now is,
you know, something like OpenAI's Deep Research project,
which is an agent system, so not just a chatbot
that responds with whatever first comes to mind, and not
even just one that reasons before it answers, but one
that will actually reason and go look at, you know,
one hundred sources on the web and sort of think
(33:33):
about them and sort of go down rabbit holes on
the internet, just like a.
Speaker 3 (33:36):
Human research assistant would.
Speaker 6 (33:38):
It's not obvious to me why I would ever hire
a junior research analyst for myself, again, a human one.
And uh, I think that's going to be true in
software engineering, and I think
it will be true in things like sales and marketing
and so.
Speaker 3 (33:57):
And I think it could also, by the way, be
true in the law.
Speaker 6 (33:59):
We saw the leading legal AI firm, Harvey, which has
partnered with OpenAI, release a legal workflow agent that
does junior level, sort of white shoe law firm tasks.
And yeah, I mean, I think we're going to start
(34:20):
to see a situation where unless you're a very very
very promising young person trying to enter a knowledge work field,
you will increasingly struggle
Speaker 3 (34:30):
Just to get a job.
Speaker 6 (34:31):
And I think that's going to be a
big problem, right? I think we've
seen absolutely nothing when it comes to
aggressive AI policy, because once that starts to be a
major consideration, there will be a demand to do something,
and I worry that that something is going to come
from a patchwork of states.
Speaker 8 (34:55):
Yes, yeah, just one thing on that is, this is
a real conversation that I've had with multiple people who
basically it's, oh
Speaker 3 (35:08):
Are you hiring any interns?
Speaker 8 (35:10):
And the answer is, honestly, I just pay for o1
pro at two hundred dollars a month, and it is
much cheaper than hiring an intern.
Speaker 3 (35:19):
It is much easier for me to figure this out.
Speaker 8 (35:22):
It's always there, and it is just probably more knowledgeable
than anyone that I could hire. And then there
was deep research that came out and does the things
that Dean's talking about, and I think that is, you know,
that was the next step. But I think this was
already a conversation I'd had with multiple people when o1
pro was starting to get explored, and I totally agree
(35:42):
that it is just going to exponentially get worse
over time in terms of labor displacement.
Speaker 4 (35:48):
And this is where I think states actually have a
pretty clear lane of intervention that I haven't seen a
lot of the seven hundred bills address, which is just
this future of work question of how are we monitoring,
for example, when those layoffs are occurring, as Dean stressed,
if we look at things like the WARN Act at
the federal level or the mini-WARN acts at the
(36:11):
state level that address instances of mass layoffs, in a
lot of cases, we're just not going to see those
reports if we have that sort of incremental displacement, displacement
by AI, and that's going to undermine the state's ability
to help train the workforce for the jobs of the future. Right,
that's important information to have when it comes to addressing
(36:31):
K through twelve education, reforming K through twelve education, and
making sure that folks do have lanes of economic opportunity
and that we can actually reform retraining and upskilling programs
that to this point are about as effective as watching
a bunch of Khan Academy videos.
Speaker 5 (36:49):
So lots of room for improvement there.
Speaker 4 (36:51):
Before we jump off, I'll give each of you a
chance to just make a bold claim. Right, what can
we expect from the remainder of twenty twenty five? Is
there some major breakthrough we're going to see from a
regulatory standpoint that listeners should have their eye on. Dean,
I'll start with you, because I'm in a position of cold
Speaker 5 (37:13):
Calling, and you drew the unlucky number.
Speaker 3 (37:17):
Yeah, a breakthrough.
Speaker 6 (37:22):
I think here, here's what I'll say, and this maybe
is not that bold, but what I will say is
I think that before the year is out, we will
see a
Speaker 3 (37:31):
Legislative framework emerge from the states that is genuinely new
Speaker 6 (37:38):
And innovative at a structural level in various ways, and
that that maybe like comes at some of these problems
from a different angle.
Speaker 3 (37:50):
I think we'll see multiple of those, in fact, and I, for
I for.
Speaker 6 (37:55):
One, am excited about that because I think right now
that too many of our policy proposals are either taking
stuff from Europe or just taking existing American legal concepts
off the shelf and kind of putting them into the
DVD player of AI and seeing what comes up. And
I just kind of think we
(38:18):
could do with more imagination, and I think we'll start
to see it. Whether any of those legal frameworks are
actual laws or whether they become laws is harder for me
to say, but I do think we will actually make
genuine progress this year, because I'm seeing enough stuff click
together just across the field that it just
feels inevitable that it will happen.
Speaker 4 (38:41):
Well, Dean, we're going to have to have you back
on in twenty twenty six to make good on whether
you owe me a beer or I owe you one.
Speaker 5 (38:49):
Sunny, how about you? Any bold predictions?
Speaker 8 (38:52):
I'm actually going to go further and say that not
only will you see a new kind of regulatory framework
and way of thinking about this, but you will actually
see that pass into law in a meaningful state, and
that will help shape a lot of the policy conversation.
I think once you get that first success over the line,
it will inspire both people
Speaker 3 (39:13):
That didn't like it to take the idea of regulation
Speaker 8 (39:16):
More seriously, but also people who did like it to ask,
how are we going to replicate this?
Speaker 3 (39:22):
Or how can we improve on this? What can we do?
Speaker 8 (39:24):
Maybe at the federal level. I think once that
thing passes, you are just going to release a cascade
of effects at both the state level but also the
federal level, where people start to think about this even
more seriously than they already have been so far.
Speaker 4 (39:37):
A sort of regulatory gold rush. So, Sunny, that sounds
like you raised the bet. So I think you owe
both Dean and me a beer, depending on how things go,
but we'll have to leave it there. Thank you again
to Dean and Sunny for joining, and Libby, I'll kick it
Speaker 5 (39:52):
Back to you.
Speaker 2 (39:54):
Awesome. Thanks Kevin, and thank you for a great conversation.
Thank you all for joining us today for sharing your
insights and opinions with our audience. It's really fantastic. If
anyone's interested in learning more about all of our programming
here at RTP discussing the regulatory state and the American
way of life, please visit our website at regproject dot org.
(40:16):
That is regproject dot org.
Speaker 3 (40:19):
Thank you.
Speaker 1 (40:26):
On behalf of the Federalist Society's Regulatory Transparency Project, thanks
for tuning in to the Fourth Branch podcast. To catch
every new episode when it's released, you can subscribe on
Apple Podcasts, Google Play, and Spreaker. For the latest from RTP,
please visit our website at regproject dot org. That's
regproject dot org.
Speaker 3 (40:54):
This has been a FedSoc audio production.