
June 12, 2025 • 62 mins
Join us Monday, June 9th, at 12:00 p.m. ET for a timely discussion examining how artificial intelligence is fundamentally upending existing data protection laws and reshaping the debate over privacy protections.
The rise of AI has created a tension between unlocking AI’s transformative potential and protecting personal data. As AI systems require vast amounts of data to function effectively, traditional privacy frameworks face unprecedented challenges. Our panel of experts will address emerging issues in data privacy such as how AI is challenging conventional data privacy best practices, state-level privacy regulations and their impact on AI innovation, sectoral challenges in healthcare, education, and finance, and what a modern privacy framework designed for the AI era might look like.
Featuring:

Pam Dixon - Founder & Executive Director, World Privacy Forum
Kevin Frazier - AI Innovation and Law Fellow, University of Texas School of Law
Jennifer Huddleston - Senior Fellow, Technology Policy, Cato Institute
[Moderator] Ashley Baker - Executive Director, Committee for Justice

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Regulation after regulation. There are dated regulations that need to.

Speaker 2 (00:04):
Be changed. One hundred and eighty five thousand pages.

Speaker 3 (00:08):
Our public accountability and transparency.

Speaker 4 (00:11):
There will be no public supports.

Speaker 2 (00:13):
It's really the best we can do.

Speaker 3 (00:14):
There's a regulation that doesn't make any sense.

Speaker 5 (00:16):
Why do you.

Speaker 4 (00:17):
Know who wrote the regulatory laws you must comply with.

Speaker 1 (00:20):
Welcome to the Regulatory Transparency Project's Fourth Branch podcast series.
All expressions of opinion are those of the speaker.

Speaker 6 (00:33):
Good afternoon, everyone, and welcome to today's Regulatory Transparency Project webinar
titled "Does Privacy Exist in an AI World?" This is
part one of the discussion series "Rethinking Data Protection." We're so
glad that you're able to join us today. My name
is Libby Dickinson, and I am the assistant director with
the Federalist Society's Regulatory Transparency Project. As a reminder, all

(00:53):
opinions expressed are those of the speakers and not of
the Federalist Society. We're honored to be joined today by
a fantastic panel of legal experts addressing the questions
surrounding AI policy and its effect on consumer data privacy.
After the panel discussion, there will be a short period
designated for audience questions. We ask that you submit them
via Zoom's Q&A feature, not the chat, and

(01:14):
that they are pertinent to the topic at hand. In
order to get right to the discussion, I'll start by
briefly introducing each panelist and then hand things over to
our moderator to start the conversation. Our panel today includes
Pam Dixon, the founder and executive director of the World
Privacy Forum, Kevin Frazier, AI Innovation and Law Fellow at
the University of Texas School of Law, and Jennifer Huddleston,

(01:36):
Senior Fellow of Technology Policy at the Cato Institute. Our
moderator today is Ashley Baker, Executive director with the Committee
for Justice. You can find out more about today's moderator
and panelists at fedsoc.org. That is fedsoc.org.
With that, I will hand things over to Ashley to
get us started. Ashley, thank you so much for joining
us today.

Speaker 4 (01:57):
Thank you, Libby, and thank you for hosting. As was said,
it is a very timely topic.

Speaker 5 (02:00):
We couldn't have a tech policy discussion without AI this year,
it seems, so I'm going to start off the panel
with kind of just a broad general question for the panelists.

Speaker 4 (02:11):
It's been a couple of minutes, so what
do you.

Speaker 5 (02:14):
Kind of see as being the state of affairs right
now in AI and privacy? I know that's a very
broad thirty-thousand-foot question, but what exactly are the issues
that you're focusing on, so we can each kind of
narrow it down a little bit.

Speaker 2 (02:28):
Yeah, happy to jump in here first. Ashley, thanks so
much for moderating. Libby, thank you so much to you
and the Regulatory Transparency Project for hosting. It's great to
be included among some esteemed panelists, including Pam and Jennifer,
so thanks for this opportunity. From my vantage point here
in Austin, I think that we are in the equivalent

(02:49):
of nineteen oh eight when it comes to AI. So
nineteen oh eight was when the Model T was first introduced,
and if I were to ask you who's going to
win in a race, someone driving a Model T or
someone riding a horse, the answer would probably be the horse.
And the reason why is we didn't have the rules,

(03:11):
we didn't have the norms, we didn't have the know
how to make sure we were using cars to their
full advantage. And as a result of that missing legal infrastructure,
social infrastructure, and cultural surroundings, we weren't maximizing the
benefits of that technology, because we were relying on the
roads and the norms of a bygone era. Well, obviously,

(03:33):
fast forward to now, and I think most folks would say
they would opt for their Cybertruck or their Prius
or whatever car over that horse, because we built that
infrastructure out. In the same way, I think when you
look at the state of AI in twenty twenty five,
we still haven't built the roads. We still haven't built

(03:54):
the street lights and the stop signs for figuring out
how best to use this technology. That's particularly true when it
comes to privacy law. All the things we know about
how AI works, and how AI works well, really kind
of conflict with the best practices when it comes to
thinking about privacy law. The best practices that were developed

(04:16):
in the nineteen seventies are things like limiting data collection,
things like limiting data sharing, things like making sure we
have specified use cases before the data is collected and
making sure we delete that information as soon as possible.
And yet, for AI to do well, and for AI
to do well in very specific contexts, we need as

(04:37):
much data as possible. We need to hold on to
that data for as long as possible, and we need
to continue to learn from that data on an ongoing basis.
And so I think a lot of folks find themselves
logging on to ChatGPT, Gemini, or Claude and thinking, huh,
you know this is great that I can write a
helpful email and bust out this reference letter in two

(05:02):
minutes instead of two hours. Yeah, but is this really
transforming my life? And I think the answer is no.
We haven't seen those kind of more transformative use cases
of AI, and I think a big reason for that
is the fact that we haven't updated our laws to
account for that, and in particular, we haven't updated our
privacy laws to account for that. So I'm very much

(05:24):
looking forward to this conversation, and hopefully conversations on an
ongoing basis that spark creativity, so that we can begin
to change our approach to privacy and data collection to
unleash the more positive, transformative and public facing uses of
AI that we haven't seen yet.

Speaker 3 (05:43):
Hi, thanks so much, Kevin. I love your comments,
and first off, thank you for your kind invitation
to bring me on to talk with all of you
about this. I really appreciate it. It's great to see
everyone here. So I just really want to follow on
your comments Kevin, and you know, I agree completely that

(06:06):
we don't have infrastructure yet. You know, AI is not
a new technology. It's been around for many, many decades now,
and there's a lot we do understand about it. But
as we're getting to more advanced forms of AI, there
are changes. I actually co-authored a large report about

(06:27):
this topic, parts of this topic, in twenty twenty three
with Kate Kaye, our deputy director. It's called Risky Analysis. It's
all about AI governance tools, and our basic argument there
was, well, in regards to privacy, what is behind us
is a huge history of privacy beginning in the nineteen seventies,

(06:49):
where we had decades of time to adjust to the technologies.
But AI is really looking like it's a different kind
of an animal, a bit of an unknown, in
that we're not quite sure what privacy is going to
completely look like as AI gets its infrastructure developed.
So to that point, I think something very important to

(07:11):
consider is what I call protocol. I've spent a lot
of time working on principles. I was part of the
committee that created and helped draft and work through the
OECD AI Principles that are now really normative, and those were
begun in twenty seventeen and then finalized in twenty nineteen,

(07:35):
updated in twenty twenty four. But the thing is,
principles aren't enough. Law isn't going to touch
all the things that we're talking about here. And I
like the car analogy. I think it works. I'm going
to add an analogy here if I may. For me,
the way I think about this is really at what

(07:56):
I call a protocol level. And so if you recall
when email was.

Speaker 2 (08:00):
Just starting, I do.

Speaker 3 (08:04):
There's a protocol called SMTP, where you have to use
SMTP, or the systems themselves, the large systems or global
systems, have to use SMTP or similar protocols so that
the email systems can talk to each other, very similar to
HTTP and HTTPS on what used to be called the

(08:25):
World Wide Web; now we just say online. So you have
to have these very deep technical structural protocols so that
systems can talk to each other. And that is what
is in process with AI. We do have some AI
protocols now and they're being built right now. And I

(08:46):
think this is a really important thing to think about
because this is actually where privacy is going to live.
And you know, laws and regulations may follow, but it's
the protocols that really are going to create a lot
of the new practices that will follow from how
we learn about AI and how AI ends up interacting

(09:09):
with systems and data and et cetera. So I'll leave
it there and then join in the conversation as it develops.
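To make Pam's protocol point concrete, here is a minimal Python sketch of a client speaking SMTP. The host name and addresses are placeholders invented for the example, not real infrastructure; any standards-compliant mail server could stand in.

```python
# Minimal sketch of the SMTP analogy: any client that speaks the
# protocol can hand mail to any server that speaks it, without either
# side knowing the other's internals. Host and addresses are invented.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Interoperability via protocol"
msg.set_content("Delivered because both systems speak SMTP.")

with smtplib.SMTP("mail.example.com", 587) as server:  # hypothetical relay
    server.starttls()          # upgrade the channel to TLS
    server.send_message(msg)   # the protocol carries the message
```

The deep interoperability lives in the protocol itself, and whatever guarantees a protocol bakes in, privacy included, travel with every exchange; that is the layer at which Pam argues AI's privacy practices will actually live.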

Speaker 4 (09:16):
Thank you.

Speaker 7 (09:18):
I want to echo my fellow panelists in thanking the
Federalist Society and the Regulatory Transparency Project for hosting this event.
Thank you, Ashley, for moderating, and thank you to my
co-panelists for some excellent remarks

Speaker 2 (09:32):
Already.

Speaker 7 (09:33):
I took a couple of notes that I'm going to
work into how I respond to this
question, because I think you both already highlighted some
excellent points. But before we do that, I do think
it's important when we talk about data privacy to define
more specifically what we are talking about, because privacy can

(09:53):
be a very broad term. We talk about privacy as
it relates to citizens' interactions with the government. We talk
about privacy as it relates to consumers, and that can
be either individual consumers or kind of business-to-business consumers,
as it relates to privacy between many of the entities

(10:13):
that we enjoy the products and services of. We talk
about this both online and offline, and then oftentimes we
see things like security or data breaches each get thrown
in there. For today's conversation, I think Kevin and Pam and
I all are planning to focus on that kind of
second element, where we're really talking about that consumer privacy interaction.
How does AI interact with some of the ways that

(10:35):
we as consumers, or that businesses to businesses, may be
thinking about the data that they're using, the data that
they're obtaining, what consent means, and things like that.
Kevin mentioned that in many ways, this is like nineteen
oh eight and the first Model T has rolled into town.
I'm actually going to disagree a little bit there. I
think this is more like a nineteen ninety five or

(10:57):
nineteen ninety six moment, where the Internet has gained consumer popularity.
AI has been around for quite a significant period of time.
In fact, we've all actually been interacting with artificial intelligence
in many ways for many years before things like generative
AI like ChatGPT. That kind of huge leap forward

(11:18):
in consumer products integrating AI really became more a part
of our daily lives just a couple of years ago.
But AI itself isn't necessarily as novel as it's sometimes
made out to be. And while it's a very disruptive technology,
I think it's important to remember that our existing laws
did not go away, and that can be both good

(11:40):
and bad. So to kind of build off of the
nineteen oh eight and the first Model T rolls into
town example, you know, we probably did already have some
stop signs or some rules of the road that were
used to horse-and-carriage types of laws that could
adapt to that new technology. But we also saw that
there were some problematic laws both in attempts to try

(12:02):
and stop the technology. If you ever want an interesting
read on this kind of interaction, I highly recommend reading
about red flag laws, where people literally had to run
in front of cars so that they wouldn't scare the horses.
But in many cases, the laws were able to adapt
to this new technology without further intervention. What's somewhat unique

(12:25):
with AI is that there are laws that may be
able to provide guidance, but there are also laws that
could be disruptive in negative ways, where we could see
things like our data privacy norms being challenged, or
data privacy laws that can't adapt to new technology that
could actually be more privacy sensitive at times. And this

(12:46):
is because law is typically static while innovation is dynamic.
And we've seen this with things like the GDPR in Europe.
When certain AI products initially tried to launch, there were
a lot of concerns, not around whether these were privacy
sensitive enough, but whether they were able to comply with the
GDPR's specific requirements. So in many cases, when we're talking

(13:09):
about AI and privacy, I think it's forcing us to
go back and ask some of those foundational questions in
data privacy, why do we value privacy, how do we
value privacy versus other rights at times? What do we
do when these things come into friction, and how do
we create a situation where we're encouraging the good benefits

(13:30):
of data usage, some of the amazing things that AI
is going to be able to do, even with things
that might be considered particularly sensitive like financial or medical data,
while at the same time ensuring that there is
some form of recourse for consumers
if they are harmed by malicious uses of data. And
if something does go wrong, what are their abilities to

(13:53):
get some form of recourse in that situation? And how
can we make sure that consumers are educated in a
way where they're able to make actually meaningful, consent-based
decisions based on their individual privacy preferences. And I know
there are a lot more things that I want us
to talk about with what we're seeing going on at
various state levels as well as at the international level,

(14:15):
but I think that kind of gives an overview of
how I've been thinking about this.

Speaker 5 (14:20):
Thank you, Jennifer, and thank you for defining what we
mean by privacy here, and that we're really
focusing on data governance; those are two very different,
very different things. We're not focusing on governmental privacy
or, you know, how one feels about one's privacy rights personally.
And I'll ask the next panelist actually to define AI
while we're at it.

Speaker 4 (14:41):
But, moving on from that.

Speaker 5 (14:43):
I think it makes sense here to have a greater
discussion about the current laws. As Jennifer said,
sometimes they don't apply very well now, and it's kind of
for one of two reasons. There's, kind of, you know,
the problem of trying to fit a

Speaker 4 (14:53):
Square peg in a round hole.

Speaker 5 (14:54):
They just don't work for AI, or there are
concerns about hindering the development of the technology,
and, like Kevin said, making it, you know, useless for
practical purposes. So, Kevin, I'll turn it over to you
next to first talk about what AI specifically we're talking
about, whether it is just algorithms or if it's generative
AI or AGI, which, you know, no one seems to define consistently,

(15:17):
but I think with generative AI we have a lot more
to go with here. And I'll ask you, like: so,
our existing privacy law framework, or I would say frameworks,
because we have multiple privacy laws, does it help or
hinder AI development? And does it really still apply today?

Speaker 2 (15:34):
Yeah, So in terms of defining AI, I don't think
I have enough time to run through all the manifold
different definitions that could be offered up here. Typically, when
folks are talking about AI in a policy.

Speaker 4 (15:46):
We're not talking about AGI, yes.

Speaker 2 (15:48):
Yes. Typically, when folks are talking about AI in a
policy-making context, I think there are a couple key
buckets that come up. One is commonly referred to as
recommendation algorithms or engagement algorithms. Here you're thinking of going
on to Netflix, and it's telling you the next show
that you should watch, and that's as a result of

(16:08):
both data that you've put in, the shows you've watched previously,
as well as sort of inferences that that algorithm is
gleaning from your interactions and then recommending that next true
crime series or reality show for you. In the same
way we see recommendation algorithms on social media platforms that
are likewise using user generated information as well as other

(16:34):
observations about your behavior to recommend certain content. Here, I
think we're focused instead mainly on generative AI. This is
the sort of new, but, to Jennifer's point, only somewhat new, technology.
Depending on who you ask, some people will point to
twenty seventeen as the real emergence of generative AI, some
folks will go even further back, and some people will contest

(16:57):
every day in between. But generative AI is more than
just applying an algorithm to data that has been entered
into a system. It's about creating outputs from that data,
and in many ways it's just a formal statistical process
of looking at all of this training data going through

(17:19):
that training process to say in which instances am I
generating an output that I think aligns with how I've
been trained or what goals have been set by my
developer for what constitutes a good output. And so in
this way, the unique privacy aspects come from the fact
that we need tons and tons of data going into

(17:40):
these models to train them to output information that is
actually useful, that's actually valuable to its specific use case.
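To make that training process concrete, here is a toy Python sketch of the statistical loop Kevin describes: generate outputs, measure how far they fall from what the developer counts as a good output, and adjust the parameters. The data and the tiny linear model are invented stand-ins, nothing like a real large language model.

```python
# Toy sketch of a training loop: generate outputs, score them against
# a target, and nudge parameters toward better outputs. The data and
# model size here are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))         # stand-in for encoded training data
y = X @ rng.normal(size=8)            # stand-in for "good output" targets

w = np.zeros(8)                       # the model's parameters
for step in range(500):
    pred = X @ w                      # generate outputs from current model
    grad = X.T @ (pred - y) / len(X)  # direction of the output error
    w -= 0.1 * grad                   # adjust toward better outputs

print("training error:", float(np.mean((X @ w - y) ** 2)))
```

The privacy tension follows directly: the more rows in the training matrix, the better the fit, which is why these systems pull toward collecting and keeping as much data as possible.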
There are a lot of other models we could be talking
about, new things that are popping up. World models,
for example, are particularly important for autonomous vehicles. There
are specific multimodal models that are more focused on visual

(18:04):
outputs for example. Here, though, I think a lot of
our focus is on generative AI and specifically large language models.
And so with that in mind, it's important to point
out that a lot of our data laws, a lot
of our privacy norms have evolved from the Fair Information
Practice Principles, which were established in the nineteen seventies. And so,

(18:27):
as I hinted at back in my introduction, a lot
of those Fair Information Practice Principles, or FIPs, are grounded
in conceptions of privacy that just don't jibe with AI.
So this may be things like being able to delete information,
being able to access information, and really keeping information in

(18:48):
a highly siloed bucket. So when we look at our
federal privacy laws, we see laws like FERPA. FERPA is
protecting the student data that's collected in all of our
manifold schools across the country. We see laws like HIPAA
that pertain to health information. We see laws like the

(19:08):
GLBA pertaining to financial information, and the Fair Credit Reporting
Act also dealing with a lot of financial information. And
so all of these laws developed out of bespoke concerns
about that specific niche of how data is being collected
and how it's being used. Well, the really interesting part

(19:28):
about AI is that it works best at spotting patterns
that humans otherwise couldn't detect. It works best at diving
into nuance and exploring new concepts that otherwise would have
gone unexplored if it didn't have access to troves and
troves of data. And yet, so long as we have

(19:49):
laws like FERPA, HIPAA, the GLBA, and, I could go on,
the Privacy Act of nineteen seventy four, that are grounded
in a sectoral approach that are grounded in a siloed
approach to data collection, then we're going to struggle with
seeing the full benefits of AI apply. And that really
came to a head in some of the conversations about

(20:10):
how DOGE, for example, was using AI to assist with
its efforts to make government more effective and efficient. Where
if we see, for example, that you're not able to
access the whole span of data that the federal
government may have on different agency budgets, for example, well
you may be missing the sort of interdependencies and patterns

(20:35):
that could better identify ways to save money, better identify
ways to help citizens access government services. All of these
things are partially blocked by the fact that we have
data that exists in silos. Well, of course, the reason
we have data in silos isn't only because of those

(20:55):
privacy concerns and data protection norms, but there are also
considerations like cybersecurity issues. The more data you put in
one centralized data set, that's a big honeypot that bad
actors are going to want to go after. And so
how we navigate this tension of trying to get more
and more data to further train and improve these AI

(21:17):
systems versus recognizing the very real risks posed by that
centralized data collection is a really wicked problem that we're
going to have to sort out over time. But I
don't think that our current framework really allows for that
meaningful conversation, and so we need to take a more
holistic approach of asking what is it that we want

(21:38):
from AI? Do we want it to really disrupt systems?
Do we want it to really be used in its
most transformative fashions? If so, then we need a holistic
revision of some of those underlying principles.

Speaker 4 (21:51):
Thank you for all that. That's very insightful.

Speaker 5 (21:53):
On the federal level. Now let's turn a little bit
to the states as well. I know there's a lot
of debate regarding federal preemption and a federal law, and
whether or not there should be an AI moratorium, more or less,
or, as some are calling it, a "ban
on federalism," as I saw in headlines today, but something
that would preempt this patchwork of state laws

(22:14):
that would be a hindrance to the development of AI. Jennifer,
could you touch a little bit on what's going on
at the state level?

Speaker 7 (22:21):
So, lots going on at the state level. If we
just look at AI-focused bills, over one thousand bills
related to AI were introduced in state legislative sessions
in the last six months, so we are seeing a
large amount of conversation around AI at a state level,
and these of course vary from things like what we
saw in Colorado, a quite comprehensive and, in my opinion,

(22:45):
concerning regime where you could potentially have one state effectively
regulating AI more generally because of requirements around either data
usage or requirements around model size or things like that.
There's another element though, as it relates to the role
that states may have when it comes to AI, and

(23:06):
that's what we've seen with the emerging state data privacy patchwork.
So we're up to over nineteen states that have some
sort of comprehensive consumer data privacy law, all of which
differ from one another slightly, and even if they were
the same, the law might have different interpretations when it comes
to things like artificial intelligence's interaction with these state laws.

(23:31):
So you have two really risky patchworks starting to emerge.
And the reason that these patchworks, in my opinion, are
potentially risky is that they cause confusion both for innovators and developers,
as well as for deployers, so companies that may be
looking to use AI in beneficial ways or to help

(23:52):
their customers, and also for the individual users, the consumers
whose data is being used. I'm based in the DC area,
and if you live in an area like that, you
know that it's quite possible that you could be, for example,
on a Zoom call with one person in Virginia, one
person in the District, and one person in Maryland, and

(24:14):
all three of those would have different data privacy regimes.
That's confusing enough in kind of the traditional Internet era.
When we start to get to a world where AI,
which is so dependent on data and data usage, is
now having to also navigate that, it gets even more confusing.
So while I often think there are a lot of

(24:35):
things states can do to really be the kind of
laboratories of democracy to provide a good soil for innovation
to really grow and flourish, when it comes to things
like data privacy, like that kind of model level of regulation,
we're really going to need a federal framework so that
there's some degree of certainty for this innovation to flourish.

(25:00):
That's also going to really be necessary when we start
to see the interactions between US law and the laws
that we're seeing, for example, in Europe, or some
of those developments, as Pam mentioned, of informal norms, of
industry-based regulation and kind of protocols over perhaps some
of these regulations. If state laws are going to potentially

(25:23):
require certain things that can make it difficult for a
company to take a certain approach, that can be really
disruptive or could result in one state not having the
benefits that are available to other consumers.

Speaker 4 (25:37):
Thank you for that, Jennifer.

Speaker 5 (25:39):
And so I'm going to turn to Pam now, just
to kind of ask a broader, more general question about
how we think about data protection practices and how that's
kind of evolved as AI has evolved, or at least
as it's become a lot more popular. And so, from
your perspective and your experience working regularly with both
those who are impacted by the regulations and the regulators,

(26:00):
how has it changed the way that we kind of
fundamentally think about data privacy practices.

Speaker 3 (26:06):
Yeah. So, if I may just slightly redefine AI just
a wee bit, I think it's really important to talk
about generative AI and also large language models. But
we really have to also talk about what are called
deep learning models, because that's a lot of the infrastructure
of AI. We can't neglect it. That's more predictive analytics,

(26:29):
not quite the same, it's a little more infrastructure oriented.
So the data privacy question is quite complex in the
United States in particular. It's actually complex everywhere, but there's
more certainty in about one hundred and sixty countries now
because they have regulatory structures that are more harmonious with

(26:49):
each other. They work across borders more easily. Kevin's correct.
The sectoral structure that the US works in is very
difficult in an era of AI. AI is difficult in
regards to privacy practices no matter what, but with a
sectoral structure it gets very, very difficult.

Speaker 4 (27:11):
FERPA.

Speaker 3 (27:12):
We have a very large report out about FERPA. We
wrote it; let's see, it published right during the pandemic,
as I recall, and everything you want to know
about FERPA is in that report. It's called Without Consent.
I mean, it's almost
as if FERPA and some of the sectoral laws live

(27:34):
in a different world than kind of where we are
right now. So what I would say is that there
are some parts of some of the sectoral laws that
do apply, especially the Fair Credit Reporting Act, because the
Fair Credit Reporting Act does regulate credit scoring, and credit
scoring is a subset of AI, and it does work.
There are certain parts of other laws that apply to

(27:57):
biometric use. Biometrics are also a subset of AI, so
that also applies. So you see these
bits and pieces, like a subregulatory patchwork, if you will,
that applies to AI. I would also say that some
of the state laws that have been passed around AI,
the definitions of AI are really all over the place,

(28:20):
just definitionally, it's really difficult to find cohesiveness, and all
of these things I think can be corrected over time,
But right now, I think it'd be really tough to
just wave a wand and say let's pass this really
giant AI regulation and we're going to get it right.

(28:41):
I think we're in a really important transitional time, and
I don't think that everything that we
want to be settled is going to be settled quickly.
It's going to take time and experimentation. And let me
give you a really good example of this. So the
FDA has a lot of very important AI projects going

(29:06):
on with medical devices. They have some practices and some
protocols in place, and basically you see a lot of
scientists there working through how it works. So, for example,
there's an amazing FDA study on medical devices and how
AI fit, in other words, how well the algorithm fits

(29:28):
the device, how the AI fit is working within patients.
This is incredibly important, incredibly sensitive work. Is
it privacy?

Speaker 5 (29:39):
Yeah?

Speaker 1 (29:40):
It is?

Speaker 3 (29:41):
But is it also quality assurance? Yes it is. It's
more complex now. The patient in this instance definitely has
a privacy interest in that data. There's also a quality
assurance interest in that data. There's also a broader public
interest in that data because if we can get algorithms

(30:02):
within certain medical devices to have good fit and to
have what's called good noise reduction, then we can have
really great medical outcomes, so we have a lot of benefit.
So how to balance all of these interests is really
really the heart of the question that we're looking at.

(30:23):
And I think what's really important here is to be
very patient with all stakeholders and to really hear from
everyone and to try to get as much information as
possible and be really fair about it. We need to
hear from scientists about certain contexts. We need to try

(30:43):
to not be just completely broad in everything we're saying,
but to really look for specific contexts and try to
work those contexts and get those contexts right. There are
some contexts which are much easier than others to get
privacy right in. There are some really tricky contexts
to get privacy right in. One of the trickiest that

(31:04):
we're really looking at right now, certainly health is one,
but another one is identity. You know, identity ecosystems are
very sensitive. The United States does not have a national
identity system like most of the rest of the world does. So,
for example, a lot of the world has something called

(31:24):
identity authorities that have a ministerial or cabinet level position
to do nothing but adjudicate identity in that jurisdiction or country.
We don't have something like that. We also don't have
a system that is guided by a law that specifies
how people can and cannot use identity. We have facsimiles

(31:46):
of pieces of that. But for those countries with identity
authorities and strong identity laws and strong comprehensive non-sectoral
privacy laws, you're seeing these countries tackle the identity privacy
problem as a first and foremost problem. We're not doing
that here. But if you really think about how AI works,

(32:08):
we are coming up on a really big identity issue
and in terms of privacy, how do we handle that? Well,
it's going to depend on the context. So each of us,
in our contexts that we're working in, we need to
really think about how do we solve the problem right
in front of us. I think that we have quickly

(32:29):
moved past the point where we could just launch a
silver bullet piece of legislation into the stratosphere and say, okay,
we just fixed.

Speaker 4 (32:37):
AI with this.

Speaker 3 (32:38):
No, I don't think that we're there anymore. AI is
too complex, it's too suffused into the deeper infrastructures and protocols,
technical protocols. So we're going to have to really look
at contexts, really get the use cases, and work from
that kind of basis. That's a personal opinion. I'm

(33:01):
interested to hear other ideas, but that's where I think
we're going, and I think that's where the rub will
be and I think through looking at various contexts will
develop what the new protocols will be specific to data
privacy and data protection and data governance.

Speaker 2 (33:18):
There you go, thank you, thank you.

Speaker 4 (33:21):
For all that.

Speaker 5 (33:22):
And that raises some interesting points and some points that
are probably best addressed by a different panel. But you know,
you have all these other external policy interests that have nothing
to do with privacy that, you know, kind

Speaker 4 (33:32):
Of lend themselves to the ID problem, for example. So there are a
there are a.

Speaker 5 (33:36):
Lot of political pressures and other overlapping policy issues
that are totally outside of the privacy or AI sphere.

Speaker 4 (33:44):
So we'll kind of open it up to the.

Speaker 5 (33:45):
General panel, and we can talk a little bit
about, you know, if we were to start from, you know,
the ground level, if we didn't have any of the federal
privacy laws now on the books, or any
of the state laws that are passing, what are some, you
know, kind of rules of the road, or just
general considerations surrounding where do we start? Let's

(34:05):
take data retention and deletion practices, for example. I
think that would be a good place to start,
as well as data maximization and minimization
and localization. I think we should start with data.

Speaker 2 (34:18):
Yeah, I'm happy to jump in here. I have a
paper forthcoming in the Regulatory Review that I'll be sure
to send out on the various socials, theorizing about what
it would look like if we could donate data like
we donate blood, so in the same way that we
really champion and celebrate people who share valuable information for

(34:40):
the collective good, or excuse me, share their blood for
the collective good, and bank that as a resource that
can be used for people in myriad contexts, what if
we could do that with data. I go run around
Town Lake here in Austin, and I record tons and
tons of data on this Garmin watch. Garmin knows my
heart rate, it knows where I run, it knows pretty

(35:01):
much why I run. It knows everything about it. If
I could share that information with let's say a public
data bank, think of it like a blood bank, right.
If I could share that information with a public data
bank that could then share that information with let's say,
nonprofits that want to analyze how we can better monitor

(35:22):
public health for community residents, or with the City of
Boston so that they can look at how we're going
to re engineer certain traffic patterns, or where we need
to build out a new trail system that would be
incredibly valuable not only for the residents of Austin and for myself,
but also for future generations to be able to use
this information in a way that's training some of these

(35:45):
more public-oriented use cases of AI. And what's frustrating
to me about relying on the status quo is we've
just all gotten pretty dang apathetic when it comes to
our data sharing norms and practices. We give so much
information to private stakeholders who don't necessarily have a direct

(36:05):
benefit or a direct intention of redirecting that data towards
something that is going to definitively and clearly help us.
And so I really think we should start to consider
what it would look like for a data donor model
to develop where you're seeing the sharing of information in
a way that directly benefits you, your family members, and

(36:26):
your community. I think that's one fruitful place to start.
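As a purely hypothetical sketch of how such a data-donation model might be wired up, consider the short Python illustration below; the class names, the storage reference, and the purpose tags are invented for the example, not a description of any real system Kevin is proposing.

```python
# Hedged sketch of the "donate data like blood" idea: a donor deposits
# data with an explicit consent scope, and the bank releases it only
# for uses inside that scope. Everything here is a hypothetical design.
from dataclasses import dataclass

@dataclass(frozen=True)
class Donation:
    donor_id: str
    data_ref: str             # pointer to the donated dataset
    allowed_uses: frozenset   # purposes the donor consented to

class DataBank:
    def __init__(self):
        self._donations: list[Donation] = []

    def deposit(self, donation: Donation) -> None:
        self._donations.append(donation)

    def withdraw(self, purpose: str) -> list[str]:
        """Release only donations whose consent covers this purpose."""
        return [d.data_ref for d in self._donations
                if purpose in d.allowed_uses]

bank = DataBank()
bank.deposit(Donation("runner-42", "bank://runs/42",
                      frozenset({"public_health", "urban_planning"})))
print(bank.withdraw("public_health"))  # released: within consented scope
print(bank.withdraw("ad_targeting"))   # empty: outside consented scope
```

The design choice doing the work is the consent scope traveling with the donation, which is what would distinguish a donor model from today's open-ended data sharing.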
Another way to start is to really start investing more
in basic AI research. So, something that we talked about
in the prep call here that Jennifer pointed out, or
excuse me, that Pam pointed out, is that there's really
been some impressive technical jumps with respect to this idea

(36:48):
of machine unlearning. So how do you take information
that the model has been trained on, and let's say
that a consumer says, hey, actually, I'd like to opt
out of the model being trained on my information. Well,
as it stands right now, that's a very complex, very
difficult technical challenge. If we can instead invest in some

(37:08):
of this basic research into how AI is working, into interoperability,
into that explainability of AI, well, then now we can
actually craft privacy laws that are more responsive to when
we think it is proper and when it may not
be proper for AI models to train on certain data
use cases. The issue right now of trying to put

(37:30):
something into hard law of rewriting our privacy laws around
AI is that we may unintentionally set up AI for
a certain technological path dependence. The laws we write right
now could end up sending us in an unintended direction
that forecloses us from realizing certain beneficial privacy practices or

(37:53):
privacy norms among AI companies. One way to point that
out is that in some of these state regulatory proposals,
there is a limitation or a goal of, for example,
preventing or only allowing explicitly neutral AI models, models that
are free from any bias. Well, that sounds great in theory,

(38:16):
but it may be that by virtue of experimenting with
different models, we can find the model that's actually better
at detecting bias, that's better at detecting instances of discrimination.
And so now's the time to really pause, as you
pointed out, Ashley, and think about this from a more
holistic and future oriented perspective. And really I think the

(38:39):
emphasis should be on doing more technical research before we
try to encapsulate new privacy norms into law.
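On Kevin's machine unlearning point, a toy Python sketch makes the cost visible. The baseline way to honor an opt-out, often called exact unlearning, is to drop the user's records and retrain from scratch; the dataset, user rows, and least-squares model below are invented for illustration.

```python
# Toy sketch of the exact-unlearning baseline: to "forget" a user,
# remove their rows and retrain. The expense of redoing training is
# what makes opt-outs technically hard. All data here is synthetic.
import numpy as np

def train(X, y):
    """Least-squares fit; a stand-in for an expensive training run."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=1000)

w_full = train(X, y)                   # model trained on everyone
user_rows = np.arange(10)              # rows from the opting-out user
keep = np.setdiff1d(np.arange(len(X)), user_rows)
w_unlearned = train(X[keep], y[keep])  # retrain without that user

print("parameter shift from unlearning:",
      float(np.linalg.norm(w_full - w_unlearned)))
```

Research into approximate unlearning aims at the same guarantee without the full retrain, which is the kind of basic technical work Kevin argues should come before hard law.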

Speaker 4 (38:47):
Opening it up with one of the questions.

Speaker 5 (38:49):
Sorry, taking one of the audience questions right here, just
because it's kind of germane to this part of the
discussion and going forward, and relevant now. One of
today's questions asks whether or not there are risks to Congress
pursuing separate paths for privacy legislation
and AI legislation at the same time. Ought
the two efforts to be combined, or should they,

(39:10):
you know, be kept separate? Because otherwise, down the road,
we could put the two paths on a collision course.

Speaker 3 (39:15):
Potentially. I'm going to jump in, if I may.
In regards to the legislation, I think that whatever
is done needs to be done based on empirical data,
not on emotion. And it's really got to be
factually based, and there should be use cases that are

(39:37):
very specific that guide the legislation. And the legislation, whatever
proposal is developed, needs to be tested in ground truth.
It just can't be this written law that is not
ground tested against what's actually happening in reality. And I
think that that's just a really really important thing to do.

(39:59):
There are raging debates about whether or not to
keep privacy legislation on its own or combine it with AI.
I think I'm going to avoid wading into that because
I see arguments on both sides. But if I may,
I'd like to follow on with what Kevin was saying.

(40:21):
You know, if you had to start with nothing as
a framework, I think one of the things you'd really
need to look at is the entire body of work
around de-identification of data, or reduction of identification of data.
That is one of the impediments right now: the
propensity and the ability to pretty easily re-identify data sets

(40:46):
in ways that are unexpected for individuals and also for
groups of people. There are some really interesting National Institutes
of Health studies that came out in twenty twenty three
in regards to what's called collective privacy and what's called
broad consent. It's really really difficult in certain contexts to

(41:09):
be able to give consent for absolutely everything. So when
that happens at scale, which it does in the AI context,
what on earth do you do about consent? How does
that begin to look and how does that begin to work?
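As a toy illustration of the re-identification risk Pam describes, here is a short Python sketch of a k-anonymity check: even with names removed, a combination of quasi-identifiers can single a person out. The records and fields are invented for the example.

```python
# Toy re-identification check: with names stripped, the combination of
# quasi-identifiers (ZIP, birth year, gender) can still point at
# exactly one person. All records below are synthetic.
from collections import Counter

records = [
    {"zip": "78701", "birth_year": 1985, "gender": "F"},
    {"zip": "78701", "birth_year": 1985, "gender": "F"},
    {"zip": "78704", "birth_year": 1990, "gender": "M"},
    {"zip": "78704", "birth_year": 1990, "gender": "M"},
    {"zip": "78745", "birth_year": 1962, "gender": "F"},  # unique
]

def quasi(record):
    return (record["zip"], record["birth_year"], record["gender"])

groups = Counter(quasi(r) for r in records)

# k-anonymity: every quasi-identifier combination should be shared by
# at least k records; a group of size one identifies a single person.
k = min(groups.values())
print(f"dataset is {k}-anonymous")
for combo, count in groups.items():
    if count == 1:
        print("uniquely identifying combination:", combo)
```

At AI scale the same arithmetic plays out across billions of rows, which is why prohibitions on re-identification, and not consent alone, keep surfacing in this debate.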
If you start to, you know, work on this as
almost like a mathematical equation, you have to work through

(41:33):
de-identification processes, you have to work through prohibitions on
re-identification in certain contexts or for certain purposes. That's work
that is not really advanced enough for what we
need in this particular, you know, deep learning, LLM,

(41:54):
advanced era of AI, and it's just getting more and
more advanced. So I think that's super important. Part B
of what I would say here is that we've got
to be really careful when we're thinking about privacy. We
need to remember that someone is going to adjudicate this privacy.
There is going to be someone or some entity enforcing

(42:15):
whatever law is created. So how is privacy law, at
scale and at AI speed, going to be effectuated?
That's a really big question that I don't hear a
lot of people talking about. I really think that the
best exemplars there are some of the super large

(42:37):
financial systems that are operating with literally petabytes of data
and they're doing so in real time, and they're also
checking their systems in real time, and I think this
is very, very helpful. You see this in anti fraud systems,
and you see this in additional systems that are operating

(42:58):
across borders. This kind of instant checking is being done
in other areas. It will have to also be done
in the area of privacy because it is going to
be AI governance tools that are used to adjudicate privacy.
It's not going to be people in beautiful suits and
ties and dresses and you know whatever in rooms. It's

(43:23):
not going to be attorneys in rooms adjudicating privacy on
a one-on-one basis, or, you know, like this.
It's going to be digital systems, AI-driven, adjudicating this
at scale. So if you look at any major
multinational corporation today, they're going to have some kind of
tool that's looking at how their data flows are working

(43:46):
and if they are complying with current privacy laws. This
is how you see automated data breach notifications. It's how
companies can find various flaws in their
own programs. These might be checked by hand,
but really, when you're looking at AI scale and speed,

(44:08):
this is going to be very automated. So one of
the big questions we have to answer in privacy going
forward is how will privacy be automated and how can
we make it effective? And how can we be sure
that the automation is working the way we want it to.
So these are big questions.
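To make the automated-compliance idea concrete, here is a minimal Python sketch of the kind of check Pam describes: scanning data stores against a retention rule and flagging violations. The categories, limits, and records are all invented for illustration, not drawn from any actual statute.

```python
# Minimal sketch of automated privacy compliance: scan records against
# per-category retention limits and flag what is held too long. The
# policies and data here are hypothetical, not real legal limits.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Record:
    store: str
    category: str       # e.g. "health", "financial", "marketing"
    collected: date

RETENTION_DAYS = {"health": 365, "financial": 2555, "marketing": 90}

def scan(records, today):
    """Return records held past their category's retention limit."""
    flagged = []
    for r in records:
        limit = RETENTION_DAYS.get(r.category)
        if limit is not None and today - r.collected > timedelta(days=limit):
            flagged.append(r)
    return flagged

records = [
    Record("crm", "marketing", date(2024, 1, 5)),
    Record("warehouse", "health", date(2025, 3, 1)),
]
for r in scan(records, date(2025, 6, 9)):
    print(f"retention violation: {r.category} record in {r.store}")
```

In production such a loop would run continuously over live data flows, which is what Pam means by privacy being adjudicated by AI governance tools rather than by attorneys in rooms.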

Speaker 7 (44:27):
I want to get back to the audience's original question
with regards to the fact that we have an ongoing
US federal data privacy debate and we have an ongoing
US federal AI policy debate, and should these be the
same debate or two separate debates. And I think
the short answer is it's a bit of both, and

(44:48):
it's complicated. Like almost any conversation these days, these debates
should include a conversation about what the impact of
AI is on the underlying law. A lot of the
conversations around consumer data privacy have been based on the
Internet era, and as we said at the beginning, some

(45:09):
of those presumptions have changed. Some of the ideas would
be more difficult when we're talking about LLMs, when we're
talking about the potential of deployment in certain industries or
things like that, and oftentimes we have a bit
of a pacing problem that can also be a pacing benefit.

(45:32):
Sometimes if we allow the technology to evolve, we will
see where are those areas that we actually need clarity
and response, versus where those things are that we were
concerned about six months ago that we've actually found
that social norms or innovation itself are better suited to
solve or that our existing laws already cover. Likewise, there

(45:55):
are likely to be times where there needs to be
a bit of a deregulatory conversation. Are there things in
existing privacy laws, in some of those sectoral-specific
laws as Kevin mentioned, that make it more difficult to use,
for example, AI tools in cybersecurity, or AI tools that
could actually be more de-identifying and more anonymizing of

(46:16):
the data than was previously seen. So
I think these two conversations don't exist in a vacuum,
but I don't think that's unique just to AI and
privacy at this point. I testified last week in the
House Financial Services Committee Subcommittee on Financial Institutions around data
privacy in the financial institutions sector, and this is one

(46:41):
of the conversations that certainly came up: how
does this data that's traditionally been regulated potentially
interplay with existing laws like GLBA? What impact
does this have, particularly on small businesses? And I think
this is going to be something that we see many

(47:02):
other committees that have traditionally had these sectoral data privacy
laws grappling with. At the same time, it's likely in
the context of AI and AI policy that many of
the concerns that consumers have or that policy makers have
are related to their concerns about privacy. You know,

(47:24):
where is the data coming from? Has there been
meaningful consent? What does this mean in terms of some
of the practices around AI? So I think that
there is likely to be a privacy conversation within the
AI conversation as well. And so I think that,
of course, these two proposals, if we see them, or

(47:45):
however these policy conversations evolve will certainly interact. I think
when we think about the US's current approach to data privacy,
there have been a lot of statements around these sectoral
laws already about their applicability in the AI context, whether
it's from the agency administering them or beyond. And I

(48:06):
think there will also be kind of continuing questions around
whether or not these existing authorities are sufficient if there
are concerns that arise over time.

Speaker 4 (48:16):
That's another broader kind of question.

Speaker 5 (48:19):
We've been talking a lot about sectoral concerns and state
versus federal. But what about the privacy laws, I mean, well,
not privacy laws, but laws at other agencies that are
not sectoral, that are kind of broader and behavior-based,
such as the Federal Trade Commission and, you know, unfair
practices and, you know, things such as, like, fraud
and spam and that sort of thing? Like, where do

(48:40):
those pieces fit into the puzzle here? And where do
you see that going?

Speaker 2 (48:44):
Yeah, I think one really important thing to flag about
some of our unfair and deceptive acts or practices statutes
that exist at the federal level and at the state
level is there's no AI exception. Those laws apply to
any technology, and AI is included. And so when you
think about AI companies, for example, that may be misrepresenting

(49:07):
what data they're collecting, or when you think about AI
companies that may be using some nefarious practices to try
to hoover up more information about you than you intended,
those are still subject to potential litigation under state UDAPs
and the FTC in many contexts. And so we've

(49:27):
already seen, for example, here in Texas, our AG
has been on the edge of applying the state's UDAP
to some of these egregious practices that AI companies
have been using to perhaps misrepresent just how sophisticated their
AI tools may be, as well as what information these

(49:48):
AI tools are using. And at the FTC level as well,
we know that Chair Ferguson is very attentive to the
fact that we've seen historically big tech companies use their size,
use their network effects to perhaps embrace some sketchy privacy
practices that would otherwise not have been allowed if they

(50:09):
were a smaller company or didn't have the sort of
corporate power that they're able to use. So I think
when we look at these broader consumer protection statutes, it's
important to realize that they're still very effective, and they're
being used already in the States and at the federal
level to help protect consumers. And so I'd encourage people

(50:29):
to pay attention to how those various enforcement actions evolve
over time, and also encourage folks to start sharing when
they feel like an AI lab might be acting in
a way that doesn't align with their expectations. One example
that I'd particularly like folks to be aware of as
we enter and continue to progress in the so

(50:52):
called Year of Agents, twenty twenty five, a lot of companies,
including OpenAI, including Google, and soon including Anthropic,
are developing AI models with better and better memory and
better and better retention of your prompts. So you will
be able to either directly ask a model to retain

(51:15):
certain information about you, or that model just might start
inferring and saving information about you. And so we need
to get out in front of thinking through from a
consumer's perspective, what are the norms, what are the standards
we want companies to adhere to when it comes to
knowing just about everything about you and not only knowing

(51:36):
what information you've entered into that model, but then critically
inferring things about you that it will retain over time
and bake into its outputs over time.
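One way to picture the consumer-side norm Kevin is asking for is a memory store that separates what the user stated from what the system inferred, and lets the user inspect and purge either. The Python sketch below is purely illustrative; the names and the whole storage scheme are assumptions, not any vendor's actual design.

```python
# Hypothetical sketch of agent memory with user control: keep stated
# facts and inferred facts apart, make both inspectable, and let the
# user purge inferences on demand. Not modeled on any real product.
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    stated: list[str] = field(default_factory=list)    # user entered this
    inferred: list[str] = field(default_factory=list)  # model guessed this

    def remember(self, fact: str, inferred: bool = False) -> None:
        (self.inferred if inferred else self.stated).append(fact)

    def export(self) -> dict:
        """Show the user everything retained about them."""
        return {"stated": list(self.stated), "inferred": list(self.inferred)}

    def forget_inferences(self) -> None:
        """One candidate norm: inferences are purgeable on request."""
        self.inferred.clear()

memory = AgentMemory()
memory.remember("prefers vegetarian recipes")
memory.remember("likely lives in Austin", inferred=True)
print(memory.export())
memory.forget_inferences()
print(memory.export())
```

The stated/inferred split matters because, as Kevin notes, it is the inferred layer that consumers never typed in and may never know exists.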

Speaker 7 (51:46):
Oh, I want to jump in here on kind of
the existing agency point. I think that, you know, I
echo Kevin in that, in many cases, existing law applies. You know,
if you're committing fraud, we already have laws around fraud.
Whether it is that your AI is being used to
facilitate the fraud, then the bad actor using the AI

(52:09):
probably is not exempt from that. Or if it's fraudulent claims
about the AI itself and what it can do, again,
in many cases, we have existing laws that would cover that. However,
the caution I would issue there is we have seen
at times agencies try and take the fact that they
have authority over some element and expand that to be

(52:31):
a much broader authority around a technology like AI or
something like that. And I would caution the agencies,
or caution states, to think very carefully about that, particularly
in light of many of the things that
I know we've discussed on Regulatory Transparency webinars in the
past around potential agency overreach, around nondelegation doctrine, around

(52:55):
you know, the deference that may or may not be
due agencies in a post-Loper Bright world, that,
you know, an agency shouldn't just wake up one morning
and decide that it is the regulator of AI. That
being said, we have seen agencies take proactive steps at
times in positive ways. An example I can think of
as well, one that doesn't often get thought about in the

(53:17):
AI context are questions around driverless cars, and that as
an application of AI, where we've seen the Department of
Transportation having these conversations for a decade or so now,
and that there probably will be privacy conversations related to
that as well. What does your car know? What data can

(53:38):
your car use? How can that be shared or not
shared with an insurance company or with the government or
things like that, And so we certainly will continue to
see AI actions at agencies. I would just caution
that before an agency wakes up one morning and decides that
it is our federal, or an individual state's, AI regulator,

(54:00):
that it remembers how delegation works.

Speaker 3 (54:05):
I think what I would say to the issue is
that it's really important to remember that we're in a
transitional time with AI. Some laws will still apply, and
it's really important to articulate what those laws are. Certainly
the unfair and deceptive acts and practices under FTC Act Section Five.
These still have huge applicability to different aspects of AI.

(54:26):
But if you really think about AI not as a
thing unto itself, but as part of the plumbing of
our infrastructures, I think that will really help because that's
really what we're looking at happening within the next five
years or so, maybe sooner. So it's really a very

(54:47):
very deep digital public infrastructure. And because of that, it
makes it very difficult to say, well, you know, one
particular law will apply to all of AI. I think
we're really looking at a situation where we have to
look at context very carefully, not necessarily sectors, maybe sectors.
It's going to depend on how this works out and

(55:09):
on how the conversations go on this. I can't predict that.
But what I can say is that, yes, right now,
there are definitely statutes that apply, and there are many
that do, applying to pieces here and there, and
to use cases here and there. So we're really seeing
an incredible time of chaos and transition, and it's going

(55:32):
to be bumpy for a while, until things settle down,
until the digital public infrastructure is fully built out and
the networks start talking to each other harmoniously. But in
the meantime, I think it's really important again to go
back to the technical research, the scientific research, the policy research,

(55:53):
looking at what the user experiences are, and really basing
whatever happens, whatever laws we have, on multiple stakeholders and on
ground-truth reality, not just opinions. It's just super important
that we get this right.

Speaker 4 (56:10):
Anything else to briefly add? We have four minutes here.
That covers it.

Speaker 5 (56:15):
If I can ask some fun questions, what are your
favorite or least favorite AI tools?

Speaker 4 (56:20):
What's useful that you've found out there?

Speaker 3 (56:24):
I'm going to jump in first here. I think one
of the real dangers with some of the AI tools,
and I'm thinking of governance tools right now, but one
of the real dangers is when you take a tool
that was created for one purpose and start using it
for all sorts of other purposes. So, for example, LIME
and SHAP algorithms were made for very specific purposes, and

(56:45):
they're being built into so many ML DevOps systems and
AI governance tools as, like, this thing that will test
for fairness across all sorts of contexts where they were
never intended to be. And when we're using AI tools,
we've really got to have metrology in mind. We need to,

(57:06):
if we're going to use an AI tool, we need
to have a system that says, okay, this is a
high-quality tool or not. We've got to figure out
how good the tools are, and be able to adjudicate
and test those tools themselves. We need some fail-safes
in the actual tools. So yes, AI tools; yes, testing

(57:29):
of the AI tool to make sure that what we're
using is effective and fit for purpose, for the purposes
that we intended it for.
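Pam's fit-for-purpose warning can be made concrete: a fairness check is its own measurement, not something to back out of a feature-attribution tool like LIME or SHAP. Below is a toy Python sketch of one direct fairness metric, the demographic parity gap; the predictions and group labels are invented, and real fairness auditing involves far more than one number.

```python
# Toy fit-for-purpose example: measure fairness with a fairness metric
# (demographic parity gap) rather than repurposing an explainability
# tool. Predictions and group labels below are synthetic.
def demographic_parity_gap(predictions, groups):
    """Spread between groups' positive-outcome rates."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # a model's yes/no decisions
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print("positive rate by group:", rates)
print("demographic parity gap:", round(gap, 2))
```

The metrology point is that each such metric has a declared purpose and known limits, which is exactly what gets lost when a tool built for one measurement is wired into governance systems as if it answered a different question.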

Speaker 7 (57:36):
Thanks. I'm going to give some what I think are
really fun ones. I appreciate Pam's very serious answers, because
I think it's also important to remember what some of
the everyday benefits that we're experiencing with AI may be.
For example, the fact that many airlines now are using
AI if you're at risk of missing a connection to

(57:59):
help identify and rebook you on a flight, or to
let you know, and even hold planes sometimes if
there are several individuals rushing to make a flight. I
know people have had problems with this technology as well.
I'm not saying it's perfect, but it's an example of
how you know a set of data is being used
to benefit a lot of individuals in a potentially stressful situation.

(58:22):
I think one of the cool features we've seen people
doing is using ChatGPT, for example, to take a
picture of what's in your fridge and have it give
you suggestions of what recipes you could make for dinner.
I think a lot of people find that to be helpful,
and particularly if you're someone who may be trying to

(58:44):
meet specific dietary requirements or trying to deal with some
sort of limited resources, that can be hugely beneficial. And
then you know, we've seen some really cool things around,
for example, being able to do more virtual try-ons that
are truly personal when you want to see how an
item looks in a room or how a piece of
clothing looks on you before you order it, being able

(59:09):
to do that from the comfort of your home, in
ways that actually let you get a better idea. It
may not be as perfect as going to the store,
but it's the type of thing that I think many
individuals have had that experience of ordering three different things
and then having to send ones back or not being
sure what it will look like when it's not on

(59:30):
the model or not in the showroom, and really having
that kind of opportunity to just better understand the purchases
you're making.

Speaker 2 (59:38):
And I'll just add very briefly two things. So number
one, just my favorite line ever. If I can leave
anyone with one piece of advice or one note: this
is the worst AI we will ever use. AI
is only going to get better. The issues we're experiencing
will decrease over time, so just keep that in mind.
And for those who want to have a little bit

(59:59):
of fun, especially given that we are approaching a celebration
of seventeen seventy six, I wrote up a prompt that
folks should check out on my Substack, Appleseed AI.
You can just enter in a historical document and it
will spit out a translation of that for whatever audience

(01:00:20):
you're intending. So today I broke down the Declaration of
Independence for fifth graders, and among my favorite lines is
"the king has been mean." So check that out if
you want to have a little bit of fun of
breaking down everything from the Northwest Ordinance to any other
fun historical document.

Speaker 5 (01:00:40):
I actually recently asked Google Gemini to summarize Wickard v.
Filburn in the tone of a Trump social media
post, and that was pretty fantastic too.

Speaker 4 (01:00:50):
So with that, we're at the one
o'clock hour.

Speaker 5 (01:00:54):
So I want to thank all of our panelists, and thank
you to the Federalist Society for hosting us today.

Speaker 4 (01:01:00):
That was a really great conversation. I feel like we
couldn't have.

Speaker 5 (01:01:03):
Covered all of this in a day if
we tried, but we did a pretty good job
with the hour that we had.

Speaker 6 (01:01:08):
Wonderful. Thank you so much, Ashley, for moderating, and thank you
again to everyone on our panel for joining us and
for sharing your insights today. Thank you also to our
audience for tuning in and for your excellent questions. For
more content like this from the Regulatory Transparency Project here
at the Federalist Society, discussing the regulatory state and the
American way of life, please visit us at regproject.org.

(01:01:28):
That is regproject.org. Thank you.

Speaker 1 (01:01:32):
On behalf of the Federalist Society's Regulatory Transparency Project, thanks for
tuning in to the Fourth Branch podcast. To catch every
new episode when it's released, you can subscribe on Apple Podcasts,
Google Play, and Spreaker. For more from RTP, please
visit our website at regproject.org. That's
regproject.org.

Speaker 3 (01:02:00):
This has been a FedSoc audio production.