Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Welcome to the Regulatory Transparency Project's Fourth Branch podcast series.
All expressions of opinion are those of the speaker.
Speaker 2 (00:18):
Hello, and welcome to the Federalist Society's Regulatory Transparency Projects
Fourth Branch podcast.
Today we will be discussing the recent innovations and legal challenges to artificial intelligence.
My name is Elizabeth Dickinson and I am an assistant
director with the Regulatory Transparency Project.
As a reminder, all opinions expressed are those of the speakers and not of the Federalist Society. In order to get right to the discussion, I'll hand things over to our moderator, who will introduce our experts and lead the conversation. Kevin Frazier is an adjunct professor at Delaware Law, an AI fellow at the Lawfare Institute, and an emerging technology scholar at St. Thomas University College of Law, where he previously served as an assistant professor. And with that, over to him.
Speaker 5 (00:59):
Thank you very much, Elizabeth,
and thank you to Steve Schaeffer and the entire Regulatory
Transparency Project for the chance to host this important conversation.
It's my great honor to dive into AI on Valentine's
Day of all days. Don't tell my wife, but my
first love was and is thinking about how the law
(01:22):
can accelerate innovation rather than inhibit it. So it only
feels right to be talking about AI on this special
lovely day. It's my pleasure to be joined by Megan
maw Associate director of the Stanford Program in Law, Science
and Technology and the Stanford Center for Legal Informatics, and Drevakrishna,
visiting jurists at UCLA School of Law and a corporate
(01:44):
attorney in LA. First, let's set the stage. It's not a question of whether the federal government will use AI, but how, and to what extent those uses run up against legal barriers. As of late twenty twenty four, a survey of thirty-seven federal agencies tallied more than seventeen hundred different use cases of AI. Moving into the Trump administration,
(02:07):
it's clear that AI promises to feature even more prominently
in efforts to streamline and improve government services. DOGE recently made headlines for using AI to look into wasteful spending in the Department of Education, and this is likely just the start of looking to AI to achieve the administration's goals. President Trump's Executive Order on AI called for achieving AI dominance,
(02:30):
so we can expect that this is just the beginning of much more AI use to come, and that dominance will start at home with the federal government. AI integration into the government, though, faces several barriers. There's
popular skepticism on one hand, and on the other substantial
legal hurdles. Let's start with popular skepticism. Megan, I'll come
(02:52):
to you first. Why do you think folks have doubts
about government use of AI?
And to what extent do you think those fears are meritorious or perhaps grounded in a misunderstanding of the technology?
Speaker 7 (03:06):
Yeah, absolutely. Thanks again, Kevin, and thanks to the Regulatory Transparency Project. I feel really happy to be here.
Speaker 7 (03:12):
I think one thing that has been quite fascinating about
sort of this rise of generative AI, or the onset
of using this technology in all different facets, is that, well,
historically a lot of this technology was on the back end,
and so we've seen a lot of like older versions
with machine learning and deep learning, but because it wasn't
(03:33):
front and center, people were less worried about it. And so, I mean, integrations into, say, Grammarly and any of the other tools that we're familiar with seemed to be fine. But this kind of widespread, sudden mistrust came up with headliner news: you know, in the legal space, everyone's heard of Mata v. Avianca, and hallucination became a very big thing. But I think
(03:56):
it speaks to actually a deeper problem, which is we
have a serious double standard between human output versus machine output.
And I think it's because historically, in kind of these
white collar professions or you know, these tertiary services, we've
never really had to be confronted with the question of,
you know, what is quality in our practice? And you know,
(04:17):
speaking from the legal profession itself, I don't think that
any of us really have a consensus on what qualifies
as a good contract. A lot of partners will say, I know it when I see it, but that's some sort of gut reaction. If it boils down to a number of metrics or kind of different ways of classifying, you know, what's good and bad, we
(04:38):
really have no standard behind it, no kind of joint, you know, effort to work towards a standard necessarily.
And so because of this, I think this mistrust continues
to sort of boil, and I think there's also a
lot of obvious hesitation around bias and fairness and applications
(04:59):
of this technology kind of in more reckless fashions, just
because you know, there is an ignorance around some parts
of the technology. And of course, like as well, in
this current generation of generative AI, explainability is still quite
a complicated concept. Anthropic and other researchers have started to
(05:21):
dig into this, but still there isn't kind of enough scholarship,
and so I think it's an amalgamation again of all of these things, where if we don't have, you know, core metrics around the quality of human output, then we have difficulty with benchmarking, you know, how the machines actually perform against humans. And then on
(05:43):
the other hand, of course, there's, you know, this continuous sort of fear around how do you tackle bias, bias being something that, particularly for generative AI, in some of the kind of current use cases, we'll see can't be generalized, and I think that that's also something that is worrisome. What's interesting, if I may,
(06:05):
there's one more anecdote that I frequently discuss, and something that, you know, I've shared with you, Kevin, before, which is the story of Devin, and I always think that that is such a great example of
this double standard that exists. This anecdote, which was featured on X, or Twitter, was that a manager had deployed Devin, which is an
(06:29):
autonomous software engineer that was created by Cognition Labs. What was interesting about it is that the manager at first onboarded Devin without telling the rest of the team that Devin is an autonomous software engineer. And then everyone kind of
obviously gives grace to the new guy, so to speak, and it was filling in tasks, doing a bunch of work, asking questions on different websites to kind of seek out its own answers, and then later that day the software manager revealed, hey,
(07:05):
this is an autonomous software engineer. How did it do?
And immediately the sentiment turned. It was as if, you know, everyone became suddenly immensely critical. Well, it didn't do this, it didn't know how to do this in the code, there were all these mistakes. And it's just really revealing, I think, in general, that we can always afford kind of this grace, almost seeing new
(07:29):
onboarding as an opportunity, you know, a learning experience, but there's absolutely no learning curve when it comes to these machines. It has to be perfection or nothing.
Speaker 5 (07:43):
Yeah, and what I think is really telling about that story is we've all seen maybe one of the most famous studies about sentencing lengths with judges before and after they've had a snack, and you want to get the judge after lunch, right, because they're hungry, or excuse me, you want to get the judge after lunch because they're full,
(08:04):
they're happy, and they're not going to sentence you to as long of a period. But don't find that hungry judge.
And the great thing is AI doesn't get hungry. And we need to recognize that in many instances you actually may be better off having AI making adjudications about your Social Security benefits or whether or not you qualify for
(08:25):
a certain program. So, Dhruva, now turning to you: why do you think there's still this hesitation around adoption of AI into government?
Speaker 8 (08:35):
Yeah, definitely. And first, of course, thank you Federalist Society, RTP, Kevin, Megan, for being on the call. I would just say that I think one thing we have to really remember with AI is that the public has been primed by our society to expect this technology for decades, right? We've all seen Terminator 2. We've all read the sci-fi.
Right, this is very different than something like blockchain, which
is decentralized finance and nodes and all these things. All
of us have a vision of AI that's been given
to us through decades and decades of Hollywood, various sci
fi novels, et cetera. So I think there are a lot of people who, when they hear artificial intelligence, Skynet comes to mind immediately, right. Something Skynet-like, that
(09:19):
is a very hard bias to overcome very quickly. And I think we as a society are grappling with that, because we're not getting Skynet, we're getting LLMs, right. And I think
that's something that we have to always keep in mind,
is that, unlike many other new technologies that begin purely theoretical,
many folks already have an idea of what the technology does,
(09:43):
which is honestly largely flawed. Putting that aside, I think the other reason why there are a lot of doubts is because AI is very much in its, like, TMZ time period, right. It is currently this very, very dynamic, personality-filled news headline that Megan was talking about. It almost reminds me of that video of Bill Gates when he was dancing at the Microsoft launch,
but that times one hundred, right. Just two or three
days ago, we have Sam Altman taking shots at Elon
in an interview in a very very like gen Z
millennial way, right. So I think, as someone who's also younger, I understand the hesitancy when people are talking about implementing
(10:28):
technologies that are so recent, that we have a bias towards, that are being made by these kinds of mavericks, and using them in probably our most important government functions.
Speaker 4 (10:37):
That makes a lot of sense to me as to why we have the concerns.
Speaker 8 (10:40):
What I would say is, I think a lot of those concerns can be mitigated, and we'll get into that, I'm sure. And I think that a lot of people do have a misunderstanding of how AI actually fits into this larger project we call democracy, or we call America, or whatever we want to call it. Right? As a corporate attorney, my other background I want to bring in is that AI is obviously a very
profitable business. I think that sometimes people have a skepticism about that, right? When Nvidia has a three point four trillion dollar market cap and is worth more than most countries, I think a lot of people get up in arms about whether we can trust for-profit entities with things that are tied to things like democracy, voting, et cetera. I
(11:22):
understand that, I think we can also get into that
as well. But for me, from the average person, when I talk to my friends or my partner or my parents and I say, so, why are you skeptical about using ChatGPT in the government in some way, or to, you know, make a DMV more efficient?
It really comes down to a bias that was created through lots of media, a deep misunderstanding of what the technology actually does, this kind of newer-age skepticism of wealth generally, and, I think fundamentally, the fact that they are not ingrained in the technology on a day-to-day basis, so they are very confused about how it works, and
(12:04):
thus that instills a lot of fear.
Speaker 5 (12:06):
Yeah, it's hilarious. Sometimes when I talk to people about generative AI, I'll say, so, tell me which model you're using. Are you using Claude, ChatGPT, Gemini? And they'll say, Gemini? No, I'm a Virgo. I don't know what you're talking about. Way off, right? I'm like, no, no, no, no, no, let's, okay, we need to zoom out a little bit. But to your point, Dhruva,
I think so much of this is grounded in fear
of the unknown, and it takes something that's difficult within
us as humans to say, okay, rather than be skeptical, I'm going to experiment. But I think it's worth pointing out cavemen wouldn't have gotten very far if, on creating fire, their first question was, how do we extinguish it? Right? The question was, wow, I can't wait to cook things with this. This will be delicious. And those who experimented with it are our forebears, not the ones who were rapidly trying to
(13:02):
put it out. So I do want to highlight some
important efforts underway, particularly in Oklahoma, with respect to AI literacy because, as we've talked about with so many of these concerns, the benefits we can realize from AI won't be achieved if the public is just reading the TMZ equivalent of AI fear mongering. Programs like the one in Oklahoma, where they're partnering with Google to create an AI Essentials program for ten thousand residents, that's something everyone should be copying. And so I'm very optimistic that we'll
see hopefully AI literacy expand, but assuming we eventually address
some of these public barriers and confront that skepticism head on,
it's also worth pointing out that there are legal barriers in the way of rapid AI adoption. Believe it or not,
(13:55):
most of our federal laws were not created with ChatGPT in mind. When we're thinking about issue spotting as
good lawyers, I want to encourage us to think both
of the crazier side of legal issues that might arise,
as well as kind of the more mundane issues that
we may need to pay attention to. So, Megan, I'd
(14:17):
love to start with you on just what some potential
hurdles may be, even if they seem potentially outlandish to
folks who aren't as steeped in this world.
Speaker 7 (14:27):
Yeah, I think one of the things that I've been hearing recurring is this fear around accountability: like, who is going to be, you know, responsible if something goes wrong, especially with an AI agent. And so, you imagine, the sort of state of the art that's come out from OpenAI recently was Deep Research, and so
(14:48):
it's right now capable of producing research papers at actually a B grade, which, to me, for my students, you know, is not actually that bad. And so I'm imagining them as, you know, a kind of policy analyst or a policy intern that's going to be working with me.
So, actually, just to backtrack for those, you know, that are not familiar with AI agents: in this kind of current space, it's just this idea that it's software that's able to interpret a goal and then identify its own ways of achieving that particular goal. In the case of Deep Research, it's sort of like this standalone kind of software: you pose a research question, and it's able to go out and produce a really comprehensive report.
And so now, in this era of, you know, these tools that are interpreting much more abstract goals as opposed to just, you know, individual tasks, you're sort
(15:47):
of in this space where, well, you know, who is actually guarding these AI agents and the work products that they produce? And there's been a lot of scholarship and discussion about, you know, are people actually increasing or decreasing their capabilities of being critical? And I think that's why it lends itself
(16:08):
to this accountability question, where, you know, if people are reducing their critical analytical abilities and increasing their trust towards these tools, because, you know, the initial fairy dust around how good these tools are is still there, then what are we going to do if, you know, something goes wrong?
(16:32):
And, you know, I'm always reminded of a tool in the past, back in twenty seventeen, that was really, really popular in the medical space. It was created by a company called Babylon Health, and it was a consumer-facing tool that was capable of diagnosing diseases, especially rare diseases. And I thought that it was quite interesting, because they bragged to no end at
(16:55):
the time that they were capable of achieving ninety-two percent accuracy on diagnosis, in comparison to medical doctors trained at Harvard, trained at Oxford or whatever tier-one university, with thirty years of experience, who they said achieved something like, you know, eighty-five
(17:15):
percent accuracy. I don't actually know how they make these types of calculations, you know, that's one thing. But the
recurring question was, well, what happens with misdiagnosis by these tools? With, you know, a doctor, it is actually pretty common to hear about a one-in-eight misdiagnosis, and
(17:36):
you get a second opinion. But where do you get the second opinions of these AI agents? And so that's why a lot of people have been floating, you know, the concept of legal personhood. But, you know, in our current state, I still think it's a little bit far-fetched, because there are other types of safeguards,
(17:56):
guardrails, that you can put in place. But that, I think, is one perspective that people have been thinking about: this purely accountability question.
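(To make the agent definition above concrete, here is a minimal sketch, in Python, of the goal-interpreting loop Megan describes. It is an illustration only: call_model, web_search, and the SEARCH/FINISH protocol are hypothetical stand-ins, not any vendor's actual API.)

# Minimal sketch of an "AI agent": software that takes an abstract goal
# and decides for itself which steps to take toward it.

def call_model(prompt: str) -> str:
    """Stand-in for a call to some large language model."""
    raise NotImplementedError  # wire a real model up here

def web_search(query: str) -> str:
    """Stand-in for a tool the agent can choose to invoke."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> str:
    """Loop: the model picks its own next action until it declares it is done."""
    notes: list[str] = []
    for _ in range(max_steps):
        decision = call_model(
            f"Goal: {goal}\nNotes so far: {notes}\n"
            "Reply with either 'SEARCH: <query>' or 'FINISH: <report>'."
        )
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        if decision.startswith("SEARCH:"):
            notes.append(web_search(decision.removeprefix("SEARCH:").strip()))
    # Step budget exhausted: ask for the best report from what was gathered.
    return call_model(f"Goal: {goal}\nNotes: {notes}\nWrite the best report you can.")

(The accountability question lives exactly in that loop: no human chooses the intermediate steps, so when one goes wrong, it is genuinely unclear who signed off on it.)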
Speaker 5 (18:06):
Yeah, and it's so important to point that out too, because the stakes for government use of AI agents can be quite high. Right? If you're having an AI agent or any automated system make a determination about your Social Security benefits, for instance, you may be quite angry if you get a letter back: hey, your benefits have been denied, the AI told me so. Well, that's not very satisfactory, right?
(18:30):
And that doesn't have a whole lot of support in traditional notions of due process. And so I think this is one area where we're really going to have to see a sort of reimagination of what due process looks like on mass adjudication by AI. That's a sticky wicket to figure out, but it's certainly not one that current law necessarily addresses.
So we've got these issues around AI agents obviously posing some very big questions about accountability. Dhruva, throw some other law at us. We're issue spotting, right, gotta get all those points. What else are we looking for?
Speaker 4 (19:07):
So I think Megan's point ties nicely to mine, which is, I think we are still, once again, in this Wild West of approach, right? So off of accountability, right, there's a great book, I think it was called The Twenty-Six Words That Created the Internet, which is about Section 230, right, and how we really benefited from this early piece of legislation that allowed the Internet to flourish, right. In terms of accountability and those
we don't actually have that certainty yet there's a lot
of uncertainty. Right, We're trying to force the AI square
peg into the triangle hoole a lot of ways, right,
how do how does AI fit into tort law? How
(19:47):
does AI finto maritime law? How does AI? You know
the law of the horse? Right, all these kind of
different things are going on.
Speaker 4 (19:54):
I think that uncertainty, just breaking that out further, even plays into the approach to AI models, right? We are still living in the not-so-open AI versus open source debate, right, until DeepSeek. Really, kind of, you know, as Marc and Jason said, it was this huge moment. That
(20:16):
is still a viable question that the model makers themselves are discussing, right? I think Sam Altman, on a Reddit AMA, which is crazy to think about, like two or three weeks ago, said we were actually wrong, open source probably is right, moving on. Which is a huge statement to be made. So I think what a lot of lawyers, and I think a lot of legal-minded folks, forget is that the industry
(20:38):
itself is still tackling these very fundamental questions of how do we get to where we want to go? And as a result, the law cannot keep up.
Speaker 4 (20:47):
Right. And what I want to contrast that with is what's going on in the crypto industry right now, right? So, for example, in crypto over the last decade, we had this very strong overenforcement strategy, or, depending on who you ask, an intentional anti-crypto mafia basically going on at the SEC. But honestly, if you talk to a lot of real industry players at most of these large companies, a lot of them agree that we do actually need good rules, right?
Speaker 4 (21:15):
The regulation is what they want, right? There aren't as many people saying we want no rules, Wild West. What they're saying is, we want clear pathways to registration, securities registration, et cetera.
Speaker 8 (21:26):
I am not necessarily sure if AI is at that place yet. Like, I think in some ways we are. I think most people do think there should be some ethical guidelines, or that we shouldn't just allow for Skynet to actually be made, fair enough. But we're still having these very, very, very tricky questions based around AI uncertainty.
Speaker 4 (21:48):
I mean, even on the questions about fair use and copyright, we're getting the Copyright Office basically punting so many times, giving us all these different guidance points, et cetera. And we're just really figuring it out. So, to the question of what is one of the big hurdles: the big hurdle is that this is a very fast-moving target, and it's a very uncertain target, right? We're basically trying to hit a bullet train with a bow and arrow.
Speaker 4 (22:12):
Right, It's very, very, very difficult.
Speaker 8 (22:16):
So my suggestion, and, you know, what my research has kind of been getting at, is it's useful to understand that first and then see what legislation or rules we need to address specific harms, for example, right?
Speaker 8 (22:31):
So, for example, I think most people agree there should be some level of anti-pornographic deepfake act, addressing those types of things. There's that, and then there are laws saying if you ever manipulate media, you will go to prison for, like, twenty years, right. And I think that's kind of the balance to strike, which is hard because we don't
(22:55):
actually have a firm understanding of the underlying technology itself.
Speaker 5 (23:00):
To add on to that point, it sounds like you just need to work on your archery skills, right, if you want to hit that. Exactly, get out there, start working on that.
Speaker 5 (23:11):
But relatedly, it seems like a huge issue here that we haven't necessarily discussed. We talked earlier about AI literacy being a problem for the general public. There's also an AI literacy component for the federal government, right? Unfortunately, when we think federal employee, we usually don't think, ah, yes, AI PhD, Stanford trained, up and up on AI. They know all the machine learning, they can describe a neural network.
Speaker 8 (23:39):
No.
Speaker 5 (23:39):
Right. If we go agency by agency, there's not a ton of training that's been done on AI literacy to combat things like automation bias, where we see folks just default to whatever the AI algorithm told them to do. So, huge learning curve there.
Speaker 5 (23:55):
I also just want to flag that my particular issue-spotting area here would be laws like the Privacy Act. The Privacy Act passed in nineteen seventy four in response to Watergate; everyone was really wanting to make sure there's more transparency and accountability in the federal government. So, in case folks haven't read the text of the Privacy Act recently, in essence it prohibits an agency from disclosing any record which is contained in a system of records, by any means of communication, to any person or another agency. And just to stress a couple
(24:31):
of things here: record is broadly defined, so any piece of personally identifiable information, or PII, qualifies as a record, and courts have interpreted disclosure very broadly. So one issue
right now in the DOGE context: coming into the Department of Education and saying, hey, let's look through all these
(24:52):
grants and try to identify where there may be waste. Good motive? Heck, yes. I think everyone would say, if we can identify fraud, if we can identify waste, let's do it. So, A plus on that front. F minus: does that comply with the Privacy Act right now under current interpretation? Probably not, right, if you're disclosing PII that may be in those grants to a third-party AI system. It's not a slam-dunk
(25:22):
case if you're DOGE right now to try to defend that practice. So updating some of these laws, which have really good intentions but just weren't animated by the reality of using AI toward better, more streamlined government services, is a big task for the federal government now.
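(A concrete illustration of the disclosure worry above: a minimal Python sketch that scrubs a few obvious PII patterns from a record before it ever reaches a third-party model. This is an assumption-laden toy, not a description of what DOGE or any agency actually does, and a regex pass is nowhere near full Privacy Act compliance.)

import re

# Illustrative only: redact a few obvious PII patterns with typed
# placeholders before text is sent to any third-party AI system.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace matched PII with placeholders like [SSN]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical grant record. Note the name still leaks, which is the point:
# regexes alone do not make a disclosure safe; names need real entity recognition.
record = "Grantee Jane Doe, SSN 123-45-6789, jane@example.gov, 202-555-0142"
print(scrub_pii(record))  # Grantee Jane Doe, SSN [SSN], [EMAIL], [PHONE]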
(25:43):
Thinking about that, I had an interesting LinkedIn conversation, which is something you only say if you're not young anymore. I'm on LinkedIn, and I'm saying, you know, the federal government needs to lean into AI. This is a good thing. Don't be scared. And somebody responds, why? Just,
(26:04):
why should we even use AI in the federal government? Why don't we just pretend that this revolution isn't happening? They didn't say that second part. They just left why, dropped the mic, and then logged off, I'm assuming. Megan, why should we be encouraging this use of AI? Why can't the federal government just say, hey, we're going to take a time out, we'll let OpenAI figure it out and then tell us how to use this? Why should we encourage this sort of experimentation and adoption?
Speaker 7 (26:32):
Yeah, so, I mean, I think that's a great question. First, I think this is one of those technologies where, as you were saying, literacy really matters. I think an important part is that a lot of this fear comes from an ignorance around the technology. And in the past, as I mentioned, you know, all this technology was hidden, it was on the back end, but this is now front and center. And of course, as I mentioned, with AI agents,
(26:55):
you know, a lot of tech companies in Silicon Valley are kind of declaring it the Year of Agents. And so we're faced with a reality that we're entering into the era of human-machine collaboration, whether we like it or not.
And I think one thing that's been fascinating, sort of historically, what we see is there was a push
(27:17):
from sort of blue collar into white collar, and I think we're at kind of an inflection point of what's post white collar, almost, or what's kind of an entrance past the tertiary services.
Speaker 7 (27:31):
I think the other thing, as we mentioned, is we often have these really broad, sweeping descriptions of jobs or even tasks, but we don't realize that there are a lot of subtasks that these machines are actually far better than we are at doing. And,
(27:52):
you know, one thing for sure is information retrieval: that needle-in-the-haystack task is actually really, really well done across troves and troves of volumes. And so I think, for example, when it comes to thinking about how we should be treating this technology, if we don't use it, it'd be
(28:15):
a serious miss. And I think not only would it be
a miss for those who are actually thinking about integrating it; in the private sector, this is happening now more than ever. And so, just as a single example, one that was often brought up:
(28:35):
some people were thinking about, oh, you know, in the space of patent law, a lot of people want to file patents, and, you know, they think that this technology is really good at drafting sort of these standard patent filings. If you can imagine the USPTO faced all of a sudden with an avalanche of patent filings, and they are not
(28:58):
integrating any of this technology on their end, you are really only increasing that backlog that already exists. And so I think this kind of small use case can be applied more broadly: because it's already happening in other sectors, it's only fair that the government should be thinking about ways of integrating.
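(Megan's needle-in-a-haystack point lends itself to a tiny worked example: the Python sketch below ranks a pile of documents against a query with a toy bag-of-words score. Real systems use embedding models, and the sample filings are hypothetical.)

from collections import Counter

def score(query: str, doc: str) -> float:
    """Toy relevance score: word overlap, dampened for long documents."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(min(q[w], d[w]) for w in q)
    return overlap / (len(doc.split()) ** 0.5 or 1.0)

def top_k(query: str, docs: list[str], k: int = 3) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

filings = [
    "standard patent filing for a drone navigation method",
    "grant report on rural broadband spending",
    "patent filing describing a battery chemistry improvement",
]
print(top_k("patent filing battery", filings, k=1))
# ['patent filing describing a battery chemistry improvement']

(The machine never tires of the haystack; as the speakers note, the human failure mode is skimming, and the machine failure mode is confident error, which is why the benchmarking question keeps coming back.)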
Speaker 5 (29:19):
Yeah, it's wild to think of if the federal government had seen what happened at Kitty Hawk and been like, cool, flight was discovered. We're good with cars, we like horses, we're just going to kind of stand on the ground. You all go run with that. We're just going to stick to the land; it seems a little safer here, and we'll just miss this revolution. So, Dhruva, what are
(29:39):
some rationales for why you are kind of pro-adoption in this case?
Speaker 8 (29:44):
So I'm definitely a techno-optimist, and, you know, I'm one of those guys that is like, AI is our version of fire, right? Like, it is species-changing in the long term. So for me, it's a bit of a hard question to answer; it's one whose answer I just assume, right. But when I was thinking about
this question, I think there's a few different points. One,
I think there's an inevitability question, which is
it is inevitable that this technology is already so integrated everywhere.
And I'm not even speaking about high level use cases.
I've just been on social media, like even on things
(30:17):
like LinkedIn. There are AI bots everywhere, there are people
making AI content. There's like one or two influencers who
are like, I haven't made a real piece of content in months; this is actually, like, an AI agent making all this stuff for me. So I think in terms of things like democracy, and how on a very ground level people interact with each other and have their opinions informed,
(30:39):
we as a society have already signed off on AI being a big part of that, and the government has some type of role, duty, et cetera, to understand that. So, in short, just given how often it's being used already, I think very clearly the government should also
(30:59):
understand the technology is changing its populace's, like, identity in many ways, right? That's one.
Speaker 8 (31:05):
The second thing is, I do actually believe that it has a lot of benefits for, you know, depending on the use, making democracy more accessible, and, on the other side, you know, drain the swamp, right? That whole side of things, the whole DOGE mission of, you know, we can use technology to make things less bureaucratic.
Right, I think I'm not sure how signed off I am on various methods of doing that, and that's kind of the hard part, like, Kevin, like you said, with the Privacy Act. Right, we all agree that the government should be efficient; not all of us agree that we should have people, like, sifting through our loan data, fair enough. But I do think there is something there, and I think a lot of younger folks especially understand the DMV
(31:49):
can work a lot better if we maybe use some blockchain and AI to keep the records clear, right? Fair enough. So that's another thing.
Speaker 8 (31:56):
I think the third thing, which is where I think President Trump and his administration really come into play, is the national security aspect and this idea of American tech dominance as not just, like, techno-nationalism, but a really important part of a global American agenda. I mean, you know, the little tech agenda, you know, a16z,
(32:17):
all those guys, they're all bought in. I think you're even seeing that in the whole DeepSeek controversy, right? We're seeing DeepSeek being this model where, one, there's this whole question about IP theft, whatever, but it's obviously almost like
an AI arms race. How could this hedge fund manager guy, using like a tenth of the resources, suddenly make a model like the ones we're putting millions and billions of dollars into, right? And it's becoming a bit similar to, like, Oppenheimer-style discussions over how can
(32:46):
we get faster, stronger, better, et cetera, with the least amount of resources. Once again, I think I'm bought into
that insofar as I do think, just given how important AI is, the national security threats are actually real, and the government honestly has done a great job at getting ahead of that and saying the quiet
(33:07):
part out loud, which is, we want to be the dominant force here. And that is actually very important from, like, just a mere safety point of view; like, let's put aside all the economics of it. So I am bought into all of those parts. I think each of those kind of plays a large part. But yeah, I do agree with you, Kevin, that obviously the government should be
(33:29):
adopting AI, be knowledgeable about AI, fostering it, et cetera.
Speaker 5 (33:33):
I just think it's so important, and both of you were pointing this out, to consider the counterfactual. And I don't want to live in
the counterfactual where all of our adversaries are leaning into
AI to develop the most sophisticated cyber attacks, for example,
and we said, no, we're going to slow walk AI
and not develop the responsive defensive measures. I also don't
(33:56):
want to live in the counterfactual where, for example, the
US Army Corps of Engineers isn't using AI to accurately
predict flooding right. I much would much rather And I'm
guessing millions of Americans who are affected by hurricanes this
year or polar vortexes right now would love if Noah,
for example, could give them a week's notice or two
(34:18):
weeks notice of potential disasters rather than just waiting for
their local weather person to say, oh right, watch out, folks,
you might want to move. Similarly, I am very much
a fan of the Social Security Administration using AI to
proactively identify individuals who may be eligible for benefits, right,
(34:40):
rather than just forcing folks through endless bureaucratic loopholes. If
you qualify for benefits, let's give them to you, right, like,
let's support Americans. So my push is, when folks contest the use of AI, I think it's so valuable to say, hey, look at this very specific example, and there's, or excuse me, there are seventeen hundred of
(35:02):
them that we can pull from, to say, tell me if you want to live in that counterfactual. And if that's the case, I don't know, maybe move to, like, Nova Scotia. I'm not sure if they have.
Speaker 4 (35:13):
An issue, right? No argument, but, you know, if you want to be free from AI and don't want those benefits.
Speaker 5 (35:20):
Don't stay here; I don't know, find a different place. Go be a Luddite somewhere else. But okay, with all that said, I want to end our cheery Valentine's Day conversation on an aspirational note, or perhaps a fearful one that we can discuss later. So, Megan, what's something that's exciting you
(35:41):
or scaring you at this moment that you'd like to leave listeners with?
Speaker 7 (35:46):
So I want to end on an optimistic note, and I will say, I think something you mentioned in the opening is DOGE trying to find ways of cutting sort of wasteful spending, quote unquote, in the Department of Education. To me, actually, education is the biggest focal point that we need to think about going forward.
(36:08):
You sort of brought that home really well, and as well, like, Dhruva, you also mentioned this: I think literacy is everything here, in understanding the technology, or at least making the case for experimenting to get to know better how these tools work. But on the other side of it
is how do we think about the future of educating
our coming generations? Kind of my current mission is all
(36:32):
about how do you integrate generative AI into the legal
education space, how do you prepare the next generation of lawyers?
And I think especially for professional services, this is kind
of like the first time where there's a technology that
is directly targeting kind of the area of professional services.
And so for us, I think rather than thinking about
(36:52):
ways of cutting sort of education spending, I think we should be increasing how we're thinking about, and preparing for, different methods of education. I think for the first time,
you know, there's going to be scalable, personalized tutoring, which is really, really exciting, and thinking about all these different ways of encoding experiential knowledge. I think that's sort of the
(37:15):
future that I want to see. So if I think about kind of the use of AI agents and the promise of tomorrow, it's thinking about how all these agents are going to be really impactful for making sure that our next generation is better, stronger, faster, and more agile than we ever were.
Speaker 5 (37:34):
I like this optimism, and I think this is extremely important too. When you think about an America First agenda and AI dominance, it's not enough if we just lead in developing the best frontier models if no Americans know how to use them, right? So we are far more resilient, far more likely to endure and thrive in an AI world if we lean into that agenda. So, Dhruva, how about you? Hopefully optimism as well, but if you've got to go the negative route, you do you.
Speaker 4 (38:06):
I'll give you one of those, to be fair. So I think what I'm excited about is this idea that we are in an era of immense growth, right? I think we're in an era now where the government and the industry are bought in. And I completely agree with Megan's comments that we need the literacy part as well, but at the very least, we now have two out
(38:28):
of the three people bought in. So I think we're going to start getting really unbelievable results out of that competition and that synergy. I mean, just with DeepSeek itself, right, that's unbelievable. And we're, like, just at the beginning at base camp before you even go to the summit, right? So I'm wondering, in four or five years, what's going to come out of this synergy, these partnerships, et cetera.
Speaker 8 (38:55):
I think that's also something that I personally feel as an attorney, because all of us are lawyers out of law school, et cetera: making things more efficient and accessible, especially in these white collar, corporate arenas. There's been a lot of good work coming out on how AI can be used. I think, Kevin, you actually talk about this, how AI can be used to make court systems more accessible, right? Just, you know,
(39:20):
things like civil claims, et cetera, taking and removing dockets, using them in these different ways to actually analyze caseloads, allocating resources. I'm very excited about that, because that is a very immediate and practical impact.
Speaker 8 (39:35):
I think the main concern I have, as someone who is a millennial, who's kind of gone through the social media platform whole thing, is how AI affects our critical thinking. I think there's a lot of good research coming out that talks about how AI should make you more efficient,
(39:56):
but it should not be a substitute. And my biggest concern, and this is partly about, obviously, younger generations and my generation, is people using these tools to replace critical thinking, right? And we saw it with social media already, right? We've gone from sixty minutes to, like, sixty seconds to fifteen seconds in terms of getting news. Right, there's a real concern
on my end about what happens if that is now
(40:18):
affecting things like complex legal systems. If we are using, like, something like a ChatGPT to analyze very complex First Amendment issues or gun issues or climate change or whatever, it just becomes this thing of, we still need to have people who are knowledgeable behind the scenes to fact check. And I sometimes think people in the AI space forget that. Right, there are a million and a half memes, photos of people asking ChatGPT funny things and getting the wrong results. It would be terrifying if that was actually used to, like, sentence someone, right? And then, of course, on the back half of that is, once again, all the national security side of things: using models from adversarial countries that may censor results, how that changes truth, et cetera.
So I think there's extremely high risk, extremely high reward.
As, like, a VC lawyer, that makes sense to me. But I think that's something that I'm both optimistic and cautious about.
Speaker 5 (41:16):
To piggyback on that, I would encourage especially any parents listening to this to check out the Character dot AI litigation right now about exactly that: kids using AI as companions and doing some really sketchy things with their AI companion that I'm sure most
parents would not approve of. That's a scary world that
I don't think any of us want to explore. Everyone
(41:40):
here has probably seen Her. No one wants that Her reality to come about, where, you know, instead of people trying to marry the Golden Gate Bridge, they're trying to marry their chatbot. You know, scary time. So let's not go there. I'm going to end on three quick notes. First, I'm going to scare the lawyers. Remember that the ABA Model Rules and most likely your state rules require tech competence,
(42:04):
so you can't hide from AI. That is not how you can survive as a lawyer. You have to engage with this technology. Number two, I really want to celebrate Cal State University, the Cal State system. They are adopting ChatGPT across their entire system, making it accessible to their students, their faculty, and their staff, and encouraging its use. That's
(42:27):
what we should see, especially from every public university system.
If we're trying to train the next generation of Americans who are going to lead us to a greater, more prosperous future, we have to lean into it. And then finally, I do want to applaud Case Western Reserve University School of Law, the first school to require 1Ls to get an AI certificate.
Speaker 4 (42:48):
Wow.
Speaker 5 (42:49):
You should see that across all the schools, right? Yeah, I know, you would have thought Harvard, NYU, Stanford, Case Western. So kudos to Case Western, and kudos again to the Regulatory Transparency Project for hosting this conversation. Dhruva, Megan, thanks so much for joining. This was a hoot and a half and really enjoyable.
Speaker 7 (43:09):
Yeah, thanks so much, Kevin. Thanks to the Regulatory Transparency Project.
Speaker 3 (43:14):
Thanks, what a great conversation.
Speaker 2 (43:17):
Thank you all so much for listening today, and thank you, of course, to our experts for sharing their insight and opinions. If you're interested in learning more about our programming discussing the regulatory state and the American way of life, please visit our website at regproject dot org. That is regproject dot org.
Speaker 2 (43:33):
Thank you.
Speaker 1 (43:40):
On behalf of the Federalist Society's Regulatory Transparency Project, thanks for tuning in to the Fourth Branch podcast. To catch every new episode when it's released, you can subscribe on Apple Podcasts, Google Play, and Spreaker. For the latest from RTP, please visit our website at regproject dot org. That's regproject dot org.
Speaker 3 (44:08):
This has been a FedSoc audio production.