Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Go ahead and get started this morning. Thank you all
so much for joining us on day two of this
terrific conference. Just a bit of housekeeping before we get started.
If you're logging onto the Wi Fi, the password courage
in action is all lowercase. Even though it's printed in
all caps, it is case sensitive, so you can all
(00:24):
be well connected while we're having our discussion this morning. Okay,
so thank you for joining us. My name is Saurabh Vishnubhakat. I'm a professor of law and director of the IP and Information Law Program at the Benjamin N. Cardozo School of Law in New York, and I'm delighted to be
hosting this terrific panel, which is going to proceed roughly
(00:48):
in the order that we are seated. So to my left,
pleased to introduce Professor Eugene Volokh, who is the Thomas M. Siebel Senior Fellow at the Hoover Institution and the Gary T. Schwartz Distinguished Professor of Law at UCLA School of Law.
To his left, perhaps metaphorically but also perhaps literally, there
(01:09):
is Professor Christina Mulligan, who is a professor of law at Brooklyn Law School. To her left, Professor Greg Dickinson.
He's an Assistant Professor of Law at the University of
Nebraska Lincoln College of Law. And all the way to
the left is Mr. Dhruva Krishna, who's a visiting jurist at the UCLA School
(01:29):
of Law and a practicing attorney.
Speaker 2 (01:33):
In the space of.
Speaker 1 (01:36):
Financial regulation and technology. So our remarks will be somewhat
wide ranging. It is now noon on the East coast,
so we are here freshly armed with hot takes from
the recently concluded oral argument in TikTok versus Garland, which
was argued this morning at the Supreme Court. But of
course there's much more to this debate than TikTok alone.
(02:00):
The regulation of algorithms is largely framed in terms of
either misinformation and disinformation on the one hand, or the
ability of concentrated sources of private and perhaps public power
to shape the platforms and avenues for that public discourse,
(02:20):
and what effect that might have on a robust
marketplace of ideas. So that broader remit is really what
we're here to engage with. And so, without much further ado, Professor.
Speaker 3 (02:33):
Volokh, thank you, thank you very much for having me.
This is a very interesting, interesting subject and always a
conference that I'm pleased to be at. There's an epigraph
to my presentation. It's one of my favorite poems. I
also used it in my torts class for reasons which
(02:54):
will be apparent shortly.
Speaker 2 (02:58):
But I think it works well for a lot of things.
Speaker 3 (03:00):
So this is from a poem called Hymn of Breaking Strain. It ends up being about human nature. I
don't know how well it works, but it starts out
by being about civil engineering. This is a rare subject
for a poem, but a very important one. So let
me just give you the first stanza. The careful textbooks measure (let all who build beware) the load, the shock,
(03:22):
the pressure material can bear. So when the buckled girder lets down the grinding span, the blame of loss or murder is laid upon the man, not on the stuff, the man. So the torts application is pretty clear, right,
one of the things that we actually learn, and I
think surprises us at times in our first year of
(03:44):
law school. But I think ultimately most of us come
to agree with it like it actually starts out.
Speaker 2 (03:50):
I think, not so much with civil engineering, but with
things like shipping.
Speaker 3 (03:53):
Like let's say, let's say some ship is destroyed in
a storm.
Speaker 2 (03:58):
You might say it's nobody's fault, it's a storm.
Speaker 3 (04:01):
But if you think about it, ships are supposed to
withstand storms, and the course of ships is supposed to
be laid to avoid storms. The blame of loss or murder
is laid upon the man, not on the storm, not
on the stuff.
Speaker 2 (04:15):
Oh, the sail.
Speaker 3 (04:19):
Collapsed? Well, it probably collapsed because it wasn't properly put up
or properly maintained, or what have you. So this is
very much a principle, of course, of morality in Kipling's telling.
But also I think of law, and my claim is
(04:39):
that as a general matter, when we think about algorithms,
we too need to look past the stuff, however flashy
it is, and to the people who actually are responsible
for them, in some measure, in part because they create them.
Now already, the careful reader might say, but wait a minute.
(05:00):
One of the things we know about the hyper modern algorithms,
the algorithms of kind of the AI machine learning world,
is the careful textbooks don't really measure, don't really explain,
maybe don't even really know, not even the creators really
fully know how the algorithm operates. That's one of the
(05:21):
things that gives the algorithm its immense power that as
well as creating new hazards, that is that it doesn't
have to be kind of all the decisions about the
algorithm are being made by humans. Obviously the initial decisions
are made, but then that leads to things that look
at least very much like decisions that are made by
(05:42):
the algorithm.
Speaker 2 (05:44):
So one question is what to do about that?
Speaker 3 (05:48):
But I take it we'd agree that if a self
driving car. I love these Waymos, by the way,
they are totally the future. But when a self driving
car hits you and you sue, First of all, you're
obviously not.
Speaker 2 (06:03):
Suing the car. You're suing Google if it's a Waymo.
Speaker 3 (06:06):
And second, Google will not be heard to say oh
I mean I say will not be heard in the
legal sense. There probably wouldn't actually be heard to say
it because they would realize it's a foolish thing to say.
I've heard some people say it about them, but I
don't think they would be brazen.
Speaker 2 (06:22):
Enough to say it.
Speaker 3 (06:22):
Oh, it's not our fault, it's the algorithm's fault.
Speaker 2 (06:26):
Right, You created the algorithm.
Speaker 3 (06:28):
You chose to let it loose upon the world you
want to be liable.
Speaker 2 (06:32):
So one consequence of that is.
Speaker 3 (06:35):
That, for example, when we're talking about AI and libel, well,
then, which is a topic of an article I wrote a couple
of years ago, and there are actually two pending cases
in American courts.
Speaker 2 (06:46):
I mean to do with libel by AI.
Speaker 3 (06:47):
Where really the AI hallucinated clearly false statements. I think the answer is that the AI company is ultimately responsible
for what it is that its algorithms put out, even
if it didn't consciously decide to say these things. No
human there consciously decided to say these things about this
(07:10):
particular plaintiff.
Speaker 2 (07:11):
It ought to be responsible.
Speaker 3 (07:13):
Now to be sure, maybe responsible subject to the existing
libel law rules, which may in fact turn on things
like mental state. Then we have to figure out how
to calculate the mental state. Would have to be the
mental state of the humans, I think. But then the
question is what mental state do they have with respect
to the algorithm. So maybe if it is a requirement
(07:35):
of knowledge or recklessness, maybe there has to be a showing of actual notice to the company that the algorithm is doing something. Whereas if it's negligence, it may be enough that
they programmed it carelessly.
Speaker 2 (07:46):
So there are complications, but.
Speaker 3 (07:47):
I think as a general matter, I don't think we
could say, well, no liability because it's just an algorithm.
Speaker 2 (07:52):
Well, yeah, it's just an.
Speaker 3 (07:53):
Algorithm that you guys put out there and you are
responsible for.
Speaker 2 (07:58):
So here's the second question.
Speaker 3 (07:59):
What about rights? Do companies have them? Let's say there is a law that limits the use of a particular algorithm. There was some suggestion in the NetChoice cases that maybe the result ought to be different if the output is determined algorithmically. Well, one thing that some people point out is that everything's an algorithm,
(08:19):
right? Even before the era of the algorithm, chronological or reverse chronological order is an algorithm.
Speaker 2 (08:27):
But okay, so maybe we're using the term imprecisely.
Speaker 3 (08:30):
Let's say we're talking about algorithms where it's not clear
or where the creators do not actually fully know what
the algorithm would all end up doing.
Speaker 2 (08:38):
Maybe that's what people are getting at.
Speaker 3 (08:40):
But if that's so, I think the answer remains that
if I want to create a news service and I write some algorithm that prioritizes certain things over other things, whether based on how popular they are, or how much I like their ideology or how much you, as the user, have shown that you like their ideology.
(09:03):
I think that's still ultimately a decision on my part,
and just as I should be responsible for it, I
think I should be entitled to do it.
Speaker 2 (09:10):
So those I think are.
Speaker 3 (09:11):
The two basic big picture principles about algorithms. We're obviously
going to want to get into a lot more detail,
but I just wanted to start my few minutes just
with giving this big picture.
Speaker 4 (09:28):
Yeah, So what I wanted to talk about specifically with
this group was the question of how people who maybe
operate from a libertarian philosophy or an anti regulatory perspective
should think about the regulation of algorithms in a context
where there's a lot of power concentrated
(09:50):
in just a few companies over what people maybe see
on social media platforms or experience. So, you know, that
question might seem a little funny to start because your
instinct might be, well, surely the answer is they don't
like the regulation, right, But in the last few years,
you've seen a lot of people that previously have been
very opposed to government intervening in what Facebook shows you
(10:14):
or things along that line, starting to revisit that and
asking themselves, well, has the situation changed enough that this idea, you know, that the government acting to regulate a private entity in this way is bad?
Speaker 5 (10:27):
Should we revisit that?
Speaker 4 (10:28):
And so I wanted to kind of explore that in
a group of people that might be exploring that question themselves.
A lot of this comes, I think, from the sense
that some platforms have had an anti conservative bias. There's
been a lot of dispute about whether that's true and why. There was a study that said, in a general sense,
(10:48):
that doesn't seem to be true, but there's a lot
of specific examples that make people think that's true. Facebook
and then Twitter suppressing the New York Post story about
Hunter Biden's laptop because they thought it was misinformation when it turned out to be true, banning Donald Trump on a lot of platforms after January sixth. Maybe even things along the
lines of people being suspicious that TikTok is sort of
(11:11):
promoting pro Palestinian speech in a way that's non neutral
or that's somewhat intentional. Rather than ask whether this is happening or whether there is bias, whether it's about viewpoint or, in other contexts, about sex and race, I want
to just assume it is and then think about how
(11:31):
we should think about it.
Speaker 5 (11:33):
So first I want to.
Speaker 4 (11:34):
Kind of break out different reasons that bias can happen.
Speaker 5 (11:39):
I think there's three. They're salient.
Speaker 4 (11:41):
One is a private reason that a platform, because of
its own preferences or even social pressure that's private, chooses
to alter what an algorithm shows for these intentional reasons.
Speaker 2 (11:57):
You know.
Speaker 4 (11:57):
An example might be I don't know what your experience
has been, but since Elon Musk bought Twitter slash x,
I've noticed a lot more conservative leaning content than before.
That's probably intentional if that's true. Second, I almost want
to say unconscious bias. But what I mean here is
the algorithm is just doing something that it's learned, but
it wasn't something that a designer intended it to do.
(12:21):
An example that I think is really interesting in the
context of race and gender bias is the FTC just concluded.
Speaker 5 (12:28):
A dispute with Rite Aid where Rite Aid agreed.
Speaker 4 (12:31):
Not to use facial recognition technology for five years. They had been using it to try to identify known shoplifters in their stores. Not only did that algorithm often make mistakes, it also disproportionately made mistakes, identifying women and people of color as being more likely a match for a known
shoplifter when they weren't that person, right, So you know,
(12:53):
that also added to the, eh, maybe you should slow your roll there. That's certainly not something that anyone at Rite Aid intended to be the case. They want the
algorithm to work, and they probably want it to work,
you know, equally for all people. And third is when
there's an alteration of an algorithm because of government pressure.
And I think we can see examples of that both
(13:14):
in Mark Zuckerberg's letter to the Judiciary Committee about feeling
pressure from the Biden administration in terms of COVID nineteen
information or misinformation in the pandemic, and just a couple
of days ago, a lot of the talk is that Mark Zuckerberg's decision that Facebook would alter its moderation policies, and also whether the
(13:36):
algorithm kind of puts up or down political speech, was preemptive, in response to what they expect about how the Trump administration might react to them. So presumably
when we're talking about should the government regulate algorithms, We're
probably talking about the first two situations only because in
the third situation, the cure is the poison or the
(13:57):
poison is the cure. So I want to say I
do have some sympathy for regulation where an algorithm is
acting directly on a person from a sort of procedural
due process point of view. Right, So there's no explanation,
usually at least for now, for why black box algorithms make a decision, and not a good way to interrogate,
Speaker 5 (14:20):
Like, well, why did you come to that conclusion?
Speaker 4 (14:23):
And in those cases, for these sorts of fairness and accountability reasons, I think there's good reason to
say that Europe might have gotten this right in GDPR,
where they have a provision that says that a person
shouldn't be subject to a decision based solely on automated
processing for a legally significant decision. An example that I
(14:45):
think is quite interesting is some public schools have been
using a private company's algorithm to try to assess how
much a teacher's presence in the classroom has improved the performance of the students compared to some baseline. And there was a case in Texas where someone got fired, or it was attempted to fire someone, where
(15:07):
all the sort of human assessments of his performance had
been positive. He sued. The case ended
up being dropped. But the argument was, you can't explain
to me what was wrong with my teaching because we
can't interrogate why the algorithm is saying that I'm inadequate.
So in those kinds of cases, I think I have
(15:27):
some sympathy. On the other hand, when we're talking about
algorithms that are deciding what speech to show you, I
want to make the argument that biased social media is
still better than government imposed neutrality on social media for
a couple of reasons. One is we still don't really
know how to de bias things in the sense that
(15:48):
you can know that something is miscalibrated without knowing exactly
how to recalibrate it. What is the baseline? Amazon had an algorithm that sorted through resumes and CVs, you know, as like a first cut. They scrapped using it because it learned to
never put a female candidate forward if there was any
(16:08):
indication the person was female, like women's soccer team, and they're like, you know, we don't know exactly.
Speaker 5 (16:15):
You know, there's certainly different.
Speaker 4 (16:16):
It's not fifty to fifty women who study computer science,
et cetera, et cetera. We don't know what the right
number is, but we're pretty sure
it's not zero. But because you don't know what the
baseline should be, you can't really correct it. And so having,
you know, some sort of government effort to neutralize seems
very problematic. There's plenty of private incentives to fix these situations.
(16:37):
You know, half of social media users are, you know, probably on one political side or the other. No one forced Amazon to scrap that algorithm; they did it because they were like, oh, this is bad. So we
don't necessarily need government intervention to fix this, at least
in the medium run. We have the long standing worry
that even if private entities make bad mistakes, do you
(16:59):
really want to trust the government to fix them, particularly
when the people in power might be people you disagree
with in a few years. And finally, the epistemic humility argument,
which is, you know, when in a private context, it's
easier to switch who you are working with, who you
are speaking with. These entities are not as entrenched as
(17:21):
sometimes they seem to be. And I think once you're
talking about imposing a particular behavior top down from a
government perspective, you need to be really confident that you're
doing it right. And everything I just said about, you know,
the difficulty of figuring out what neutral is gives a
lot of you know, creates a lot of suspicion that
this isn't a good idea. You might say, well, listen,
(17:43):
why don't we just you know, take a really thick
version of libertarianism and impose First Amendment values kind of on
private entities.
Speaker 5 (17:51):
Surely that would work.
Speaker 4 (17:53):
But I want to sort of share one anecdote that
maybe illustrates why that may not be a great idea,
at least in an Internet context where you don't have
robust social norms necessarily affecting how people speak.
Speaker 5 (18:06):
So the journalist Elle Reeve just.
Speaker 4 (18:08):
Published a book called Black Pill that's talking about the history
of the alt right to QAnon, and one of the people she interviews and talks to a lot is the founder and former moderator of the website 8chan,
which was designed to be you know, the total free
(18:29):
speech The only things we take down are things that
are illegal in the United States. And that person whose
name is Fredrick Brennan, really believed in the marketplace of ideas idea, that in mainstream publications people didn't allow
you to post things outside the Overton window. What we
really need is that like robust discussion and the best
ideas are going to.
Speaker 5 (18:49):
Float to the top.
Speaker 4 (18:50):
And over the course of years he became really disenchanted
with this because what happened on 8chan is that
as more and more Nazis came in, people who didn't
have those views were so kind of horrified and offended
that they left. And what you ended up with on 8chan, what he himself became horrified by, was this
white supremacist monoculture where the marketplace was not actually.
Speaker 5 (19:10):
Functioning because that was the only view present.
Speaker 4 (19:13):
So my upshot is that curation and moderation is often
necessary to facilitate the real exchange of ideas. Private entities
will make bad calls, but the exit costs are relatively
low to go somewhere else. And this isn't really inconsistent
with free speech values because freedom of association also drives
the advancement of ideas. So anyway, that's the pitch. The
(19:35):
best speech environments come not from government imposed neutrality, but
from using the government's own neutrality to allow many different
conversations to exist with different constraints and different points of views.
Speaker 6 (19:48):
Okay, please, yes, so thank you. Christina Mulligan, Professor Mulligan.
She started by asking, you know, suppose you're a libertarian,
what do you do about technology changing? And I wouldn't
have thought it was possible, But I want to zoom
out a little bit even beyond that question and think
(20:10):
about the idea of technological change and what the law
does in response to that in general. And the first
thing I want to note is this very natural human
tendency in response to a new technology development to start
the lawmaking process. It's really impossible to ignore this impulse.
(20:30):
This is where your gut goes unless your mind tells
you otherwise. And you see this in history with famously
the printing press. Boy, was that a big technology. And
you see the Western world kind of respond with restrictions
on publications that question royal or papal authority. In the
(20:51):
Ottoman Empire, even a stronger response, you see bans on
Arabic character printing presses in the fifteenth and sixteenth centuries,
and then even bans on printing religious texts up until
the early nineteenth century, so very significant responses to technology.
More recently, I loved this quote. I couldn't resist sharing
(21:11):
it with you. But in response to magazines, this was
an eighteen ninety nine newspaper article. A journalist with the
Brooklyn Eagle warned that there would be a quote brain
rot contagion caused by widespread circulation of magazines. Millions of American boys and girls, he said, will think like birds and be unable to learn or to concentrate their minds
and be unable to learn or to concentrate their minds
on anything. And the last example elevators. You think of
how scary an elevator was, they can be in fact dangerous.
But the very quick response was a pretty significant regulation
about the speed limit of elevators. Elevators were initially limited
(21:54):
to eight miles per hour. That was deemed so fast
that any faster would be dangerous, requiring attendants and things
like that. And I don't say this to act as
if new technology doesn't require rethinking of the law. I
think it does. But very typically there's not a need
(22:17):
for substantive change in the law. And in saying this
and describing what I mean. I want to pick up
on some of what Professor Vollack had to say about
LMS and defamation law. One response, a common response is well,
we need new laws about this. We've got LLLMS now.
But I took him to be saying, and he can
clarify that, well, we don't need new laws necessarily, but
(22:39):
we do need to figure out how LLMs fit into
the law of defamation. And so I want to make
that distinction between figuring out how technology fits into the
law versus making new laws. Other very good examples of this.
Think about conversion or fraudulent misrepresentation. I was just teaching
(23:00):
that this morning in my Unfair competition class. These don't
require changes in the law. I've been writing about this
with online fraud. What's wrong about fraud is that you've
usurped someone else's independent agency, their ability to decide whether
or not they want to enter into a property exchange
with you.
Speaker 2 (23:18):
And they could do that at the end of a sword.
Speaker 6 (23:20):
They could point a sword at you and say I
want your stuff, or they could point a three D
printed gun at you and say give me your stuff,
or I'll shoot you with this ghost gun. The analysis
is the same regardless, and I take that to be
what Professor Volokh was at least implying with his analysis.
And the danger is when you have the elevator situation,
(23:45):
or you have the magazine and bird brain situation, that
the new technology becomes an occasion to unthinkingly open up
for debate what have been well settled principles. And by
that I don't mean very settled, I mean appropriately settled principles,
and they might get reopened really without consideration. You might
(24:06):
start wondering whether we need new fraud laws because of
online fraud, and inadvertently upset something that was actually working
quite well. And so there's a real danger there. And
I've noticed this, as I said, most recently with so
called dark patterns online, and I've written a few pieces
(24:27):
arguing that we might need to change enforcement techniques, but
we probably don't need new law to deal with dark patterns.
And the reason is it has been illegal forever basically
to lie to someone to trick them into giving you their money,
and it's still illegal if you do it with a
deep fake. It's the same law. We need no new legal principle.
(24:49):
Some of my favorite examples from this area. I couldn't
resist sharing a deep fake of Kim Kardashian endorsing a
colon cleanser, the Kim Kardashian Colon Cleanse. If there's a deep fake case there, no need for new law. But this
is certainly a new area of law that we should
pay attention to, or a new area where we need
(25:10):
to figure out how the law of fraud applies to
deep fakes. And one response folks might wonder, well, what's
the harm.
Speaker 2 (25:19):
If deep fakes.
Speaker 6 (25:21):
Contributing to fraud are already unlawful, why not make them
doubly unlawful with a.
Speaker 2 (25:28):
Special deep fake law.
Speaker 6 (25:30):
And if that were what we would actually get, I
would have no problem with it. But to borrow the
quote from Game of Thrones, the legislative and regulatory processes
are dark and full of terrors. And I want to
share some of those terrors with you that keep me
awake at night. And one of them deals with the
(25:51):
knowledge of regulators and legislators. Just think about how little
we know. And so this has been in the paper recently.
Is moderate drinking good for you, bad for you, or
nothing or neither? We think about the evidence we.
Speaker 2 (26:09):
Have of this.
Speaker 6 (26:10):
You could read as old a book as you can find,
and there is anecdotal discussion of over consumption of alcohol
not being good for you, and the suggestion that moderate
consumption might be good for you. So we have anecdotal
evidence essentially as old as human history. We have very
good data from the last century, looking, you know, at least at good econometric data. We don't have double blind,
(26:33):
placebo controlled trials and all that kind of stuff, but
we have very good econometric data.
Speaker 7 (26:38):
And we still don't know.
Speaker 6 (26:40):
Whether moderate drinking is good for us or bad for us.
And this is a question as old as human history
that has been interesting to humans forever. And yet now
we're thinking, well, do we need deep fake laws to
restrict and require registration of companies before they apply technology
that can make deep fakes? Well over the next thousand years,
(27:04):
is that going to be a good thing or bad
thing for humanity? We think we're going to know that
when we don't even know in retrospect with twenty twenty
hindsight whether over the last century alcohol has been good
or bad for us. And so caution, I guess is
where I'm going with that over the knowledge problem. Famously,
you can worry about interest group pressure, and so you're
(27:25):
not going to get that good law. All of us could sit down and write something that had a plausible claim to being a good law, but that's very likely not what you're going to get. And so your option
is keep what you have or get a law written
by Google, and that is not likely to be a
good one, at least for competitors to Google. And then
the last kind of concern I have a long list
(27:48):
of worries, but these are the three that I'm going
to talk about now, is the idea of kind of
risk averse implementation. And so you get an age agency
that has incentives that are different from the public. The
agency writing the statute, most of them are worried about
their jobs, and so if they're going to do anything,
they're going to overregulate. The compliance team at the at
(28:10):
the company that's that's deciding how to comply with the
law will do the same thing. Their compliance team is
going to say, well, I don't want to lose my job,
and so amp it up a little bit. And all
of a sudden, a well written law that was this burdensome becomes this burdensome in the implementation by the actual companies, and so these
are my worries, and so before deciding on positive law,
(28:35):
I would wrestle with them, is the main point?
Speaker 2 (28:39):
All right, thank you. Dhruva? Yeah.
Speaker 8 (28:42):
So, as the other panelists have discussed, algorithmic regulation is a very complex, nuanced, and multifaceted issue. So taking this apart,
I want to address three perspectives I think are very
critical to understand this issue in detail. So first, in
a room of lawyers, I think it is very very
important to understand the economic incentives at play here.
Speaker 7 (29:03):
So one very common call.
Speaker 8 (29:05):
For AI regulation are calls for transparency and accountability, which
are often touted as a save-all metric to solve things like output discrimination, copyright infringement, et cetera. However, many of these suggestions actually overlook the very powerful and profitable economic
incentives behind the AI industry.
Speaker 7 (29:24):
So some numbers here.
Speaker 8 (29:25):
Currently OpenAI, which is probably the most visible, well known AI company, is estimated as being valued at one hundred and fifty seven billion dollars. At this price, its estimated value is higher than the GDP of several states, including Maine, Montana, and North Dakota, and multiple countries including Ecuador, Sri Lanka, and Morocco. Twenty twenty four is
(29:47):
a banner year for investment in AI companies generally. For example,
there is over one hundred billion dollars in AI investments
and several billion dollar fundraising.
Speaker 7 (29:55):
For specific companies. So what does this economic growth mean?
Speaker 8 (29:58):
Obviously, AI is here to stay, and despite what many think,
AI is likely not a fad like some other technologies recently. However,
despite this economic growth, AI transparency is still sorely lacking.
For example, a recent Stanford study found across multiple models that the transparency score was generally fifty eight out of one hundred, with the researchers stating that the overall state of transparency
(30:21):
in foundation models was poor.
Speaker 7 (30:23):
So what does this mean?
Speaker 8 (30:24):
Does that mean that there's a systematic issue across all AI models? I don't believe so. Instead, I argue and I believe that for these companies, transparency, sorry, lack of transparency, is not a bug, but it is a feature.
For many of these AI companies, a large amount of
their valuation and their economic value is the fact that
(30:45):
their companies are black boxes. They want to have proprietary technology, databases,
and algorithms. To that end, increased transparency only leads to
increased financial, legal and competitive risk.
Speaker 7 (30:57):
I think as lawyers.
Speaker 8 (30:59):
And legislators we have to actually understand these economic incentives
before touting new laws.
Speaker 7 (31:05):
I also want to make two new points here.
Speaker 8 (31:07):
One point is that the focus on AI models as a massive profit center is actually coming under fire. For example, Elon Musk is currently involved in contentious litigation with OpenAI, given its push to restructure as a for
profit entity. Although this lawsuit is itself a complicated issue,
it exemplifies the issues around economic incentives in the AI
space generally. In addition, there has been a large push
(31:30):
for open source models and technologies. For example, Meta has
made it a priority to create open source AI technology,
most notably by.
Speaker 7 (31:37):
Creating its Llama language model.
Speaker 8 (31:39):
However, I still urge that even these open source models
have extensive risks. For example, open source technologies can make up their lost revenue through using other less
favorable practices such as invasive advertising, locking users into a
developer's ecosystem, and selling user data. So what does this
all mean? Ultimately?
Speaker 7 (31:59):
I believe that this means that we have to always.
Speaker 8 (32:01):
Remember that AI industries and algorithms all exist within a
much larger, much more complex, and much more dynamic and
profitable ecosystem than law school classrooms, and it's very important
that lawyers and legislators understand this ecosystem before proposing wide
sweeping solutions or reform. As a second perspective, I believe
there are valuable lessons to learn from calls to address
(32:23):
algorithmic regulation other spaces, including the IP space. For example,
there are now thirty eight ongoing copyright lawsuits relate to AI.
Speaker 7 (32:31):
That have been filed in the United States.
Speaker 8 (32:33):
Among the most prominent of these are lawsuits against large
language models for utilizing.
Speaker 7 (32:37):
Copyrighted material.
Speaker 8 (32:39):
As Professor Dickinson said, I think these lawsuits raise an
important principle, which is does the current law actually work.
Speaker 7 (32:46):
On this point?
Speaker 8 (32:47):
There has been a legislative explosion around calls for AI bills.
Speaker 7 (32:50):
Every other week.
Speaker 8 (32:51):
There is now a new AI bill to address something going on: AI in elections, AI in deep fakes, AI and copyright protections, AI and colon cleansers.
Speaker 7 (33:02):
It seems like.
Speaker 8 (33:02):
There's no end to just endless AI bills. In the IP space specifically, the bills have reached new levels. Many of these bills, as Professor Dickinson has talked about, deep fake laws, et cetera, actually usurp ongoing principles in these bodies of law. For example, the proposed NO FAKES Act essentially creates a federal right of publicity that would seemingly
(33:25):
capture any realistic representation of an individual, make this a licensable right and a post mortem right. Each aspect of this is a highly disputed element within right of publicity scholarship. And in other examples, like California's recent SB 942, all AI systems are expected to watermark any AI generated content.
(33:45):
It is good to be skeptical around increases in legislation.
Our existing systems of law can handle these issues, and
to the extent they can't, it is important we take
a very steady hand. I would urge those interested in
the algorithmic regulation space to take a very similar approach. Finally,
I believe it's important that we understand the challenges that
(34:05):
come with the term algorithmic regulation. I think this is
especially concerning when it comes to issues around bias and discrimination.
For example, especially with discrimination, I think in some situations
this can be very clear cut. For example, if there
was a mortgage evaluation system that used AI to systematically
penalize South Asian applicants, I believe we could clearly say
(34:26):
this is likely an example of algorithmic discrimination. However, I
think the issue gets much more complicated in the details.
For example, Google was recently accused of having a biased
AI model. After trying to address this issue, they created an AI system that would output photos of the Founding
Fathers and Nazis as people of color. I don't believe
(34:47):
any of us believe that this is a good outcome
or this is the kind of bias adjustment we want
large tech companies engaging in. This is also an example
where right now Google is showing its cards by launching
a missile to kill a mouse. But what happens when
these weights and balances are adjusted ever so slightly, so now all outputs about the Founding Fathers include lengthy disclaimers
(35:07):
about their racist acts, preferences, and beliefs. Once again, I believe the conversation about regulation of algorithms raises similar questions that we see elsewhere in the tech space. At its heart,
raising concerns over bias also raises related questions of who
is the purveyor of truth in our society? Do we
want to put the burden on large tech companies like
X or Meta, what happens like what happened this week
(35:29):
with Meta when these companies are subject to censorship pressures
from government actors, or if these platforms take content moderation
policies that we disagree with in these circumstances, is algorithm
and regulation an actual tool or just a tool for censorship?
On my end, and call me old fashioned, I believe
that one core ask that we only talk about is
(35:50):
consumer education. Banning certain technologies, whether it be a social
media app or a GENAI tool, is an extreme measure,
and one does one that does not get to the
underlying part of the matter. For all of this to work,
the fellow society, our institution, and our democracy as well,
we need individuals and consumers who can actually understand these
issues in depth and think critically. That means we only
(36:12):
need to learn how AI actually works, how these tools
actually shape their output, and then question those outputs when
they lead to biased or truly wrong outcomes. So, in conclusion,
I think this issue is a very tricky one, and
as lawyers and policymakers, we need to create solutions that
are quick, fast and effective. However, given the promise and
importance of AI to our country, we need to also
(36:34):
make sure these approaches are measured and reasonable.
Speaker 7 (36:37):
Thanks.
Speaker 2 (36:39):
Okay, thank you, So let me start.
Speaker 1 (36:43):
I have many questions for each of you, but I'll
begin with Eugene. When you were talking about regulation through the tort system, for example, of the blame being laid with the man, the people who design the software.
If we start from the view that we're regulating in
(37:03):
order to nudge or even force the designers of softwares
to internalize the costs that they are imposing through their
shoddy design, then the case of direct human changes after
the fact, I think it's pretty straightforward. But if the
algorithm behaves in unexpected ways and we're still trying to
improve the design, there seems to me a really tough
(37:27):
empirical and technological question about how certain we, as designers of the legal system, can be, how sure we can be
that the designers of algorithms weren't already doing the best
they could. Because if they were, then what we've generated is just a general wealth transfer or a general insurance statute.
What we want is some indication that, Okay, you didn't
(37:49):
know what you were doing. You didn't know what the
algorithm was going to do, but you at least knew
the extent of your ignorance, and that might have led
you to make different design choices. And I'm sort of
rehashing the classic, you know, foreseeability analysis, right, of 1L torts. Maybe, but what's your sense of it?
Speaker 2 (38:05):
So I think this is a tremendously important question.
Speaker 3 (38:07):
I think it would be, for example, with regard to libel law,
except at least so far, there do seem to be
only two cases we know of. So maybe in terms
of either positive or negative deterrent effects, either good or
bad ones, maybe they aren't as great as we might imagine.
But you could certainly imagine situations where undue liability for
(38:30):
platforms might stifle innovation.
Speaker 2 (38:33):
For example.
Speaker 3 (38:34):
So there are a couple of ways of thinking about it.
One possibility that some people have offered, as to many
products in the past, is to just impose strict liability,
not on a culpability basis. Not, we're holding you liable because we think you're careless; rather, we're just holding you liable because we think that will in the aggregate give
(38:57):
you the best incentives, and also, relatedly, will avoid having the system as a whole blow billions of
dollars in hiring dueling experts.
Speaker 2 (39:08):
Let's say, so, let's just give you an example of something.
Speaker 3 (39:10):
I don't think this would work well for large language models,
for reasons I'll get to in a moment. But imagine
we had a rule for self driving cars, which is,
whenever there's an accident involving a self driving car, and
let's say a pedestrian, let's just make it clear, make it narrow, so we can have a very simple rule. The self driving car company is
(39:33):
going to be liable. Now, there are possible downsides. There
may encourage pedestrians to be more careless, but since you know,
the highest payout isn't going to get you your life
back or your limbs back, probably there already are sufficient
incentives for care, even though, of course, we know pedestrians often are careless. And we might say
(39:56):
this will encourage the companies to be more careful. Maybe
we might say there will be no punitive damages. So if saving one life, let's say that statistical life, will cost one hundred million dollars, we might say, no, you know,
we want you to do this kind of cold calculation,
(40:17):
although of course sometimes we know juries.
Speaker 2 (40:19):
Don't like it when you do that. Maybe we can have.
Speaker 3 (40:20):
A statutory rule that says essentially that there will be
no punitive damages, even compensatory damages according to some schedule,
and the theory being that ultimately we pay for those
injuries in one way or another. We may pay for them through the insurance system or whatever else. And the cheaper way,
because it avoids costly litigation, is just to have the companies.
Speaker 2 (40:43):
Pay for it and then spread the burden.
Speaker 3 (40:46):
And given that it appears that self driving cars are
going to be on balance much safer than humans (we set the bar very low, we humans do), I just
don't think this is going to be kind of a
crushing burden. On the other hand, if you do hold
a social media excuse me, an LLM liable for all
(41:08):
possible harms as a result of its output, what you really are saying is you're not holding it liable just for physical harms. You're holding it liable for economic harms,
maybe even emotional harms. Certainly, libel is often a combination.
The reputational harms are a combination of that, and there's
actually maybe a good reason why we usually do not
have strict liability or even negligence liability for pure economic
(41:31):
harm and such. Maybe that would be two burdensome. But
if that's so, to the extent, we do accept negligence
because libel is an exception from the pure economic harm rule.
Speaker 2 (41:40):
You can have negligence liability in libel cases.
Speaker 3 (41:43):
I think there are well understood mechanisms, however imperfect, from
design defect law and medical malpractice law and such, where
we say, you know, we do try to figure out
what's careful and what's not.
Speaker 2 (41:54):
One of the things, maybe, is to look to see what
other platforms do or what other products do.
Speaker 3 (41:57):
Maybe if other products do better than yours, that's a
good sign that you're that you're being careless. So those
are a couple of ways of thinking.
Speaker 2 (42:05):
About it, Okay, Christina.
Speaker 1 (42:08):
So you're advocating a posture of government neutrality accompanied by
the sort of private platforms exercising the power to curate
and moderate, and you know you will leave one and
go to the other, and the.
Speaker 2 (42:22):
Cost of exit is low.
Speaker 1 (42:23):
I'm with you up to that point, and it seems
to me that you run up pretty quickly at that
point against the problem of network effects because as these
platforms you know, are are rising and falling. I'm old
enough to be on to have been on MySpace and
high five and friends to before Facebook and Twitter and Snapchat.
Once these platforms are rising and falling, you know, there's
(42:45):
got to be some question are they going to converge
to some equilibrium at which either the companies that remain
will strengthen their user bases and expand or merge and
you know, sort of God forbid we bring the antitrust system into this. What do we do with the
problem of network effects leading to greater concentrations of power
(43:06):
that may be entrenched in a way that existing platforms
today are not yet entrenched.
Speaker 4 (43:12):
Yeah, and that's the worry. The sense that maybe we're almost there is, I think, what motivates a lot of people to right now be like, are
we there?
Speaker 5 (43:20):
Should we do something?
Speaker 4 (43:23):
If there are certain you know, a few main places
that people go to sort of see what people are
talking about politically, and I think the burden has to
be on kind of the pro regulatory side to show
that, you know, that that is what is happening.
And just to give an example that may not have
affected people in this crowd, but like you know, a
(43:45):
lot of people more left leaning were really unhappy with
what Elon Musk has done with.
Speaker 5 (43:50):
Twitter slash x. And it took a couple of years,
but it really seems like an alternative platform.
Speaker 4 (43:57):
Bluesky has, like you know, there were like seven alternate Twitters to start, and one of them has seemed
to get enough momentum to really pull the people that
were disenchanted with Twitter x over there and to create
a new place that people want to go and are
happy with despite the network effects issues, particularly like specifically
(44:23):
because people were unhappy with what the platform was doing.
And yes, that's not you know, you can say two
weeks after Elon Musk bought Twitter, you would have been like, oh,
there's nothing, it's not gonna happen, and even like a
year after. But so a lot of these things aren't tomorrow things, where you know, you just snap your fingers and you know it's trivial.
Speaker 5 (44:41):
It's completely trivial.
Speaker 4 (44:43):
But I think the evidence, as you mentioned, you know,
MySpace and.
Speaker 5 (44:46):
All the old, you know, Friendster. I remember, we were on Friendster.
Speaker 1 (44:53):
That data, by the way, still exists somewhere, I think, on somebody's server, right,
like those GeoCities pages that I created in college.
Speaker 4 (45:00):
Sure, but you know, until it really seems like people aren't able to say, I'm unhappy here, let's do something else, I think we've got to, you know, continue to assume that the ability to exit and change.
Speaker 5 (45:14):
Is possible, because it certainly seems like it is.
Speaker 4 (45:16):
And maybe, you know, you have to be comfortable with
there being a medium term in that, because sometimes it
does take a couple of years. But I think on balance,
we're not in a situation where things are so entrenched
that we want to think of it like that, that this is a monopolistic situation.
Speaker 1 (45:31):
Okay. And then Greg and Dhruva, I want to pose
the same question to both of you because I think
there's an interesting through line between the set of remarks
you both offered, and that's the choice that we're continually
confronted with when we try to liken problems we see
today to problems we've seen in the past, and sort
of you know, there's there's always a range of problems
(45:55):
that come from drawing the analogy too finely or not finely enough. Greg, you take the posture that, you know, the devil you know beats the devil you don't,
we like the devil we've got, and the devil we're
talking about at the moment is Google. If they write
the law, then we're going to be in a worse
situation than than the alternative. Okay, the alternative, in the
(46:16):
absence of statutory or regulatory reform or just change in
the manner that you describe, might be simply that, well, look,
there's an autonomous vehicle, and it's going to be held
to some common law development in the courts through the
tort system. That's our backstop. And I was having a
(46:36):
discussion a couple of days ago about how the the
closest analog to an autonomous vehicle is a horse, because
it makes decisions, and there's plenty of you know, with
with an odd to judge easter, but there's plenty of
law of the horse. When we when we're trying to
figure out how a system whose predilections we can predict
(46:57):
but not perfectly, and whose behavior we can anticipate but
not perfectly, we've got common law to guide us, and
we may have to reach back further than we ordinarily do.
But in that situation, two things will be different than
if we had gone the political route, the route of
the political branches. One is that the pace of legal
change will be slower. And maybe that's good and maybe
(47:18):
that's bad. And so that's one question I want to
put to both of you. Is it better for change
to come slowly for reasons of allowing innovation to sort
of proceed relatively unimpeded, And the identity of the legal
developer is different. Courts are not immune to capture, but
they are generally understood to be less susceptible than regulatory
(47:39):
agencies and elected officials and so forth. So is there
reason to favor caution for the reasons you're describing. Additionally,
for the reasons that change may come slower and we
think that's good, or in spite of wanting quick change
as suggested, in spite of that, we might want it
(48:02):
because the identity of the legal developer is a judge
rather than a capturable regulator or elected official. But both
those questions to both of you, and.
Speaker 6 (48:11):
I love the horse analogy that it's really terrific. I
was skeptical at first, but I thought, well, you do
train the horse in ways that are unpredictable. You teach
the horse how to be a good horse. I don't
know what that means, but yeah, I think it's a
pretty solid analogy, and as a common lawyer, I really
like it, and I think I think there is a
(48:33):
reason to favor the common law. So I don't want
to come across as the person who is resisting change.
Maybe this is true of me, but I don't want
to come across this way that I'm resisting change because
I'm old fashioned and scared of change. I actually think,
at least I've convinced myself that there are good reasons
to retain the existing legal structures. And some of those
(48:56):
are an accident of history, some of those are kind
of by design. And I guess one reason is I think
we have a good common law. We have a common
law that prioritizes liberty, that prioritizes property rights, and whatever
you think of that, it's turned out empirically to be
a pretty good scheme for developing countries to become richer.
(49:18):
And so we have a good foundation by a variety
of metrics, and when we build statutes on top of that,
to the extent they do anything, they're invariably changing what
we have inherited as a pretty good system.
But I also think there are reasons to favor exactly
the things that you mentioned, the slower pace of change
(49:39):
and the identity of the decision makers. I would actually
push back, though, and say, I don't think the pace
of change is slower.
Speaker 5 (49:46):
It's actually really hard.
Speaker 6 (49:47):
To build consensus and get a statute passed. It's very
easy to file a lawsuit and get a negligence claim decided.
In fact, Professor Volokh mentioned a couple of cases already involving LLMs.
We don't have a statute on that point yet, And
I think necessarily the change is slower in a different sense,
And I think this is what you're getting at. We
(50:08):
don't have an announcement from whichever court that was that
here below, henceforth, shall be the law of LLMs regarding this, that, and the other thing. We have a decision that
in this context the libel claim or the defamation claim
succeeded or failed. And that sort of incremental lawmaking is
(50:30):
useful for exactly the reasons I mentioned about our lack
of knowledge. If you don't have perfect knowledge about the
welfare effects of a technology, which we certainly don't, deciding on the particular facts of this particular case, one little tiny
step at a time can actually be a positive. And
the last point I'll make is courts are more resistant
(50:54):
to capture. In my view, it's harder for Google to
control a nationwide set of fifty states judges, many of whom,
at least in the federal system have life tenure and
don't rely on elections and donors and things, and so
I think there's a strength there too.
Speaker 8 (51:11):
I think, just adding on to that point, on the uncertainty slash certainty of law, replacing large bodies of law with new statutes creates a lot of harm for other industries. So for example, even supposedly minor changes in the right of publicity or other likeness laws create massive issues for creative
industries overnight. Sometimes I think we have to always remember
(51:35):
with artificial intelligence, and the main point I was trying to really make earlier was, that it is pervasive and will be pervasive across many industries, and creating these new laws can create a lot more confusion, a lot more confusion that creates a lot more issues. On Professor
Dickinson's point, I actually agree as well that I think this incremental case law does a better job of
(51:58):
actually handling specifics. So, for example, a lawsuit over digital replicas and pornographic deep fakes might get at the specific issues
in those circumstances better than an overarching deep fake law.
I also agree with the identity of the legal developer.
I do agree that generally judges and case law get at it better because of things like regulatory capture. So
at it better because of things like regulatory capture. So
for example, in California, they just passed a new digital
replica law where they actually added a carve out saying
your record company can actually enforce your likeness rights.
Speaker 7 (52:30):
That is just pure capture right there.
Speaker 8 (52:32):
There's no reason that occurred except that someone lobbied
them to do that. I would also say that once
again throughout all these discussions, what we really need to
also focus on is consumer education. It is so unbelievably
important that people actually know what they're using. It is
very easy for us to continuously displace responsibility to legislators
and lawyers, many of whom don't know what AI is,
(52:55):
to be completely frank with you, and don't know how
large language models even work.
Speaker 7 (52:58):
I don't really trust.
Speaker 8 (52:59):
Anyone in Congress to tell me how OpenAI works, and I'm not sure if.
Speaker 7 (53:03):
OpenAI knows how OpenAI works at this point.
Speaker 8 (53:05):
So I just really think it's important, especially as a
younger legal scholar, that people actually really understand the models,
how they work, and think critically about them. So when
we're having these very large conversations about how does fair
use work with AI, what we really need to talk
about is what is actually occurring with the ingestion of copyrighted
(53:25):
material in all these different circumstances, and then think intelligently
about each of them. So generally to your points, I
do agree, Yes, I think slow change is better, and
I do think having the law developer being not a
central body but almost like a Hyikian point of view,
right disperse knowledge that does work better than the current
(53:46):
proposals being put.
Speaker 3 (53:47):
If I could just chime in a bit, I
just want to express some doubt about education.
Speaker 2 (53:55):
We're in the education.
Speaker 3 (53:56):
Business, but let's just say that education has limits, and
one way of thinking about it is how the common law,
which I do think is a useful way, useful tool
for thinking about it, precisely for the reasons that you
gave that it's an attempt to reconcile liberty and property
rights and safety it's evolved over time through people who
(54:19):
thought one.
Speaker 2 (54:20):
Of the things thought about human beings as they actually operate.
And one thing we see is that the law does.
Speaker 3 (54:25):
Often turn on reactions of a reasonable person. So sometimes
it dismisses the possibility that some unreasonable person may misinterpret
some statement. If I say, well, here's a statement about
me that's libelous, and the court may say no, no,
on its face, it doesn't seem defamatory, in fact, seems accurate. Well,
(54:46):
what if somebody will misunderstand it in some other way?
In fact, I've got evidence that a few people have
done that. I think the court's answer is, you know,
we can't take into account every foolish misinterpretation out there. This comes up often in parody cases, by the way, where the court says this is a parody. Somebody says, well,
I have evidence that a couple of people have thought
(55:07):
it's not a parody.
Speaker 2 (55:08):
They thought it was real about me.
Speaker 3 (55:09):
Well, too bad for them, Too bad for you though,
too right, because you, as the plaintiff, can't recover for
real damage to your reputation.
Speaker 2 (55:18):
And again you might say, well, you.
Speaker 3 (55:20):
Know, people should be more careful and understanding parity on
the other hand, let's say that somebody says about me,
rumor has it that Volokh was caught with his hands
in the petty cash.
Speaker 2 (55:36):
That's why he really left UCLA, And I can sue
over that.
Speaker 3 (55:44):
But somebody might say, well, but it's unreasonable for people
to believe rumor. Instead of having liability for passing along rumors,
we should just educate people so that they are better
at not trusting rumors. One possible answer is to return to
what you were saying about alcohol and how it's been a.
Speaker 2 (56:02):
Long standing problem.
Speaker 3 (56:04):
You know, we've been worried about people believing rumors for
probably as long as there's been human language, and the
problem doesn't seem to be going away. But more broadly,
more broadly, the law says that I don't have to
wait until the world is educated not to believe the
rumors and in the meantime have no recovery right. I
(56:24):
am entitled to recover, even though it's in part
because people are being kind of not as careful, not
as critical as thinkers and consumers as they could be. Another thing
that the law, both statutory and common law, I think
of trademark, does is it recognizes that in certain contexts,
while a really educated consumer would say, oh, you know,
(56:46):
I do see this restaurant has things that look like
this and then kind of a goldenish color and it
calls itself MacDonald's, M-A-C.
Speaker 2 (56:56):
Yeah.
Speaker 7 (56:57):
But being a.
Speaker 3 (56:58):
Really educated consumer, I realize the very fact.
Speaker 2 (57:02):
That says m AC means that it's not the real thing.
But no, the law recognizes that we're not.
Speaker 3 (57:09):
Going to be constantly thinking critically about every single sign
that we drive by and making spot decisions.
Speaker 2 (57:17):
Of where we're going to buy a burger.
Speaker 3 (57:19):
So I just want to signal a note of caution
about saying, well, the solution is not regulation.
Speaker 2 (57:25):
Or perhaps liability, but more educated consumers.
Speaker 3 (57:28):
We could wait a very long time until consumers become
more educated. And even if they can become more educated
about this technology, they may not about other things, partly
because some consumers are not very smart, but partly because
they are very smart and they know they have got
better things to spend their time on than educating themselves
about every new technology. So some of the time the
law has to take into account human fallibility, reasonable humans' fallibility,
(57:51):
but nonetheless not try to estimate things based on a
perfectly educated person.
Speaker 1 (58:00):
Okay, well, so we're already in the back and forth,
and of course we want you all to be part
of that discussion as well, so please use the microphones
and we welcome your questions and reactions from the panel.
Hot takes if you've got them.
Speaker 2 (58:16):
You may have the floor, please.
Speaker 8 (58:23):
I was interested in that last exchange, and I wonder
what our other two panelists have to.
Speaker 2 (58:28):
Say about it. About the educational point.
Speaker 4 (58:34):
I mean, and so what my mind went to, which
isn't directly responsive, but if we're just riffing, was sometimes
we also say humans are not going to educate themselves well,
and we're just too worried about the government mucking it up.
Speaker 5 (58:49):
Because what I was.
Speaker 4 (58:49):
Initially thinking of in terms of education was in terms
of online misinformation. Before I think it was the twenty
sixteen election, the most shared news story on Facebook was
that Pope Francis had endorsed Donald Trump, which was
not true. But you know, on the margins, people see these and,
you know, are like oh wow, well, you know, and
(59:10):
it does seem like it wasn't just being
shared because people were like, oh that's funny, right. You know,
many people do believe stories like that because it is
you know, it takes work to interrogate, like, well, where does
that come from?
Speaker 7 (59:22):
Them?
Speaker 5 (59:22):
Let me go, do other news stories? Do other news stories
say that?
Speaker 4 (59:25):
Da da da da da? And we tend to respond
to that by saying, there's never going to be a
point where people are, you know, all being critical and
like finding multiple sources and cross referencing. But we say,
we're more concerned that if the government starts trying to
(59:46):
get in there and saying what's true and what's not,
that that's just going to lead in ugly directions. Maybe
not in that specific case, but if you kind of
open the door, lots of lots of bad choices will
come through. So that's just sort of a third perspective,
which is neither of the two that you put up,
that sometimes really people are going to be We're not
going to do anything even though people are going to
(01:00:06):
be unreasonable.
Speaker 6 (01:00:08):
So one thing I would note about that is I'm
not sure what Professors Mulligan and Volokh think of this,
but it may be that that feature of the law
and saying we're going to look at the reasonable hearer
rather than the actual hearers, there is real harm there,
and I think Professor Volokh noted this. It's not as
(01:00:29):
if you're not harmed because someone foolishly thought that you
were stealing cash, you're still harmed. They still think it,
even if they are foolish for doing so, and so
you have real harm. And so that makes me think
it's not a substantive bit of remedy law there that's
doing it, but rather kind of a procedural
Speaker 2 (01:00:49):
Shortcut.
Speaker 6 (01:00:50):
We're saying, we don't want to think about the effect
on individual hearers, and so maybe we shouldn't read too
much into it if it is merely a procedural shortcut
rather than something that we think goes to the heart
of the law itself.
Speaker 9 (01:01:05):
Eugene raised the question of something like an AI libeling someone,
and I'm not sure why this is a new problem.
That is to say, we've always had the issue in
the law of humans taking actions for some uncertain effects.
So if I'm target shooting with my twenty two, one
(01:01:26):
is at least told that a twenty two bullet can
go a mile, which is maybe an exaggeration, but you
could certainly imagine a case where I'm target shooting and
I've got one chance in one hundred thousand that the
bullet will hit some random person. Why is that any
different from saying I write an AI that is answering
(01:01:46):
people's questions. I know there's one chance in one hundred
thousand that it will hallucinate and say that Eugene Volokh
has been convicted of murder.
Speaker 2 (01:01:57):
Why is that any different? Why don't you still.
Speaker 9 (01:01:59):
Say it's a straightforward, you know, tort issue. You maybe
have strict liability if you believe that the people who
are injured aren't doing much that would affect their injury.
Maybe you don't have strict liability.
Speaker 2 (01:02:12):
You have some other rule. Or we've got a standard
list of possible rules.
Speaker 9 (01:02:17):
We could make both of them liable if you
like, or have a fine rather than
a tort damage remedy. What is new about the
AI version of this problem?
Speaker 3 (01:02:28):
Well, I did write about sixty pages more or less
saying there isn't, right. We should be applying pretty
standard legal rules.
Speaker 2 (01:02:38):
So I agree with you. Let's just say not everybody does.
Speaker 3 (01:02:42):
Some people, for example, took the view that because large
language models are known to hallucinate, that therefore the
reasonable viewers should recognize, the analogies given are a Magic Eight Ball
or a Ouija board, that they're
(01:03:03):
untrustworthy and therefore shouldn't trust them, and therefore we're going
to take the view that the readers don't actually trust them.
I think that's wrong for various reasons, one of which
is we don't invest hundreds of billions of dollars in
Ouija boards, and we don't put the answers of Ouija
boards as the first response in Google search.
Speaker 2 (01:03:25):
Right.
Speaker 3 (01:03:26):
Occasionally there are interesting twists, like, for example, Libel law
rightly or wrongly, does require knowledge or recklessness as to
falsehood as to certain kinds of statements. Essentially, let's say
about public officials and public figures. To oversimplify, how do
you apply that to a large language model.
Speaker 2 (01:03:46):
One is you might say, well, we're just going to
impute knowledge, but that's the whole thing that Libel law
doesn't allow.
Speaker 3 (01:03:51):
Another thing is you might say, well, because at the platforms
surely nobody there had the thought, we're going
to output this statement and it's false, they're
categorically immune as to the knowledge or recklessness claims, but
perhaps not as to the negligence claims. A third possibility might
be you might say, well, we ask whether somebody has
(01:04:13):
alerted the company to this, and once the company has been alerted,
then the company is potentially liable, which I think is
probably the right answer. But that's really a question of
application of existing law to the facts. So on balance,
I agree with you. Let's just say not everybody does.
Speaker 1 (01:04:26):
If we fight the hypo for just a moment, it
seems to me that if what we've got is the
designer of a large language model, the person who suffers
a reputational or other injury as the result of a
sort of tendentious prompt or something like this, then it's
not really target shooting as such. It's the gun manufacturer
that's in the position of the large language model designer,
the shooter that's in the position of the prompter, the person
(01:04:49):
who gets shot who's in the position of the person
who suffers the reputational injury. So how would we apply
gun manufacturer liability.
Speaker 2 (01:04:58):
To the model?
Speaker 3 (01:05:00):
Gun manufacturers have special immunity under the Protection of
Lawful Commerce in Arms Act, so there's just a statute.
And part of the reason that gun manufacturers are generally
immune for criminal misuse of their guns is the assumption
that there's virtually nothing the gun manufacturer can do to stop
such misuse without at the same time sort of sharply
(01:05:23):
interfering with lawful use. I do think if gun manufacturers
had really smart guns, guns that could actually figure out,
like do AI analysis of what's downrange and such.
Speaker 2 (01:05:38):
If that were the case.
Speaker 3 (01:05:40):
Then I think there might be reason to say, well,
if you already know a lot about what's being shot,
then maybe you should be liable in some measure. So
one thing is the issue of knowledge. The other thing
that I just want to stress, just because this comes
up sometimes, it's not a question of just the
tendentious prompts.
Speaker 2 (01:05:56):
It's true that if.
Speaker 3 (01:05:57):
I post something that says make up something false about Christina.
Speaker 2 (01:06:06):
Let's say, not that I ever would.
Speaker 3 (01:06:09):
If it does do that, then it can't be liable
for that, and among the things we don't want it
to be liable for is fiction. It's actually a useful
application for it to write fiction, among other things, because
then I am not going to be deceived about this,
and therefore Christina's reputation won't suffer because I'm the one
who said, give me something false. Maybe if I then
(01:06:30):
redistribute it to other people, they'll be deceived, but that
I do think is on me. On the other hand,
if I say tell me something about Christina Mulligan, a
law professor, and they say, well, here are all the things.
She's actually in prison because she's convicted of levying war
against the United States, this is in fact the fact
(01:06:50):
pattern of a particular case, though not Christina, but
somebody else. Then in that case, it's not a tendentious prompt.
I'm actually trying to figure out something. It's saying something
that's false. Unlike the gun manufacturer, presumably it's in a
position to do something about it, precisely because it's creating
those things. It's not just providing a standalone physical tool
(01:07:12):
that people use.
Speaker 2 (01:07:13):
It actually is composing that output.
Speaker 3 (01:07:16):
Maybe while it's composing the output, it has some
sort of mechanisms to minimize the risk of harm.
Speaker 5 (01:07:23):
If I can add a little bit on
Speaker 2 (01:07:24):
The, well, since you're part of the fact pattern.
Speaker 4 (01:07:27):
No, but I do think one place where
the use of algorithms to do things does put pressure
on the law is whenever there's a mens rea thing.
Speaker 5 (01:07:37):
And it's not just in speech issues.
Speaker 4 (01:07:39):
So one, you know, well discussed in AI
land is, you know, so much of discrimination law is about intent.
Speaker 5 (01:07:47):
Can an algorithm ever intend anything?
Speaker 7 (01:07:49):
You know?
Speaker 5 (01:07:50):
Like what does that mean? And even in really.
Speaker 4 (01:07:51):
Anodyne cases, there's a rule for sending, you
can't send a DMCA takedown notice if
you knowingly materially misrepresent that it's, you know, infringing or not.
But a lot of content owners sort of
have algorithms that sort of search for things and send
the takedown notices automatically. And so again the question comes
(01:08:12):
up again, which is like can the algorithm ever materially
misrepresent that it owns a copyright?
Speaker 7 (01:08:17):
Like what does that?
Speaker 2 (01:08:17):
Like?
Speaker 4 (01:08:18):
How does it know is it you know, is it
an honest mistake or does it know that it's a mistake.
So this comes up all over the law. And so
when you send algorithms to do what people previously would do,
it just puts pressure on how to like understand those
elements of.
Speaker 2 (01:08:31):
Life, right, mental state? Absolutely? Please?
Speaker 10 (01:08:34):
Yeah, So I wanted to press on some of the
things about liability for physical injuries. So you're talking about
autonomous vehicles, and maybe it makes sense to have strict liability.
I think there's a strong case for having strict liability
for human drivers when they hit pedestrians, but we don't, right,
and so you might be worried that creates a distortion
that, you know, if AVs are on average safer, right,
(01:08:55):
but you're going to slow the deployment and diffusion of
that technology because you're holding them to a higher standard
than you're holding human drivers to. By contrast, you were saying,
maybe we want to make it clear that there's no
punitive damages, But if you're worried, as a lot of
people are, that, you know, future agentic AI systems
present, you know, existential or civilizational scale risks that we
couldn't use compensatory damages to deter because we can't hold
(01:09:18):
you liable for, you know, a five trillion dollar injury,
right, or, you know, more extreme scenarios where the legal
system is no longer functioning, and so maybe you need
punitive damages to get at those kinds of risks.
When we get some kind of small scale injury, that's
maybe indicative.
Speaker 2 (01:09:33):
That you talk about endurable risk.
Speaker 10 (01:09:35):
Maybe, maybe punitive damages are an important tool
Speaker 2 (01:09:37):
For something like that.
Speaker 3 (01:09:39):
I'm not sure what use punitive damages would be once
Skynet takes over, but I don't think Skynet would say
we're not going to take over.
Speaker 10 (01:09:49):
Punitive damages in the sort of warning shot case.
Speaker 3 (01:09:51):
Maybe maybe so, Maybe so I should say these are
all interesting issues, and I'm sorry being a little glib
with with the.
Speaker 2 (01:10:01):
Exact example, but.
Speaker 3 (01:10:03):
I just think these are plausible questions to ask more
or less on a case by case basis. So I
think here's one thing that might happen. Imagine
that there's so much fear about self driving cars, and
there doesn't.
Speaker 2 (01:10:20):
Seem to be so far.
Speaker 3 (01:10:22):
It looks like in part that's because there have been kind of controlled
experiments that seem to have been pretty successful.
Speaker 2 (01:10:26):
I think people are pretty open to them.
Speaker 3 (01:10:28):
But imagine there's so much fear that there's a risk
of just a prohibition on that, right. You can't have
a self driving car under existing law, as I think,
as I understand traffic law, I think traffic law says
you've got to have a driver at the wheel, and
so there's so much fear of that that the technology
is just being held back. Then at that point it
(01:10:51):
seems to make a good deal of sense to have
as a legislative compromise. Look, we're going to impose strict liability.
We may limit punitive damages because this is one area
where we think punishment is not good and we don't want
to put companies in a position that they
may be financially ruined for making decisions that would cause
some risks to pedestrians, because we want them to have
(01:11:13):
some risk to pedestrians. We don't want them all driving
at ten miles an hour because that diminishes the risk
to pedestrians, right. We want them to balance things, and
we don't then want the jury to get upset at them,
as apparently was the case in the Ford Pinto
situation, for making a cost benefit balancing analysis. So at that
point you come up with this compromise. Why? Not because
(01:11:33):
in principle that is necessarily that different from the human
driver slash pedestrian scenario. It's just because that seems to
be what the needs of kind of promoting this progress
at an acceptable level of risk call for in
a particular situation. And in this sense, by the way,
the Protection of Lawful Commerce in Arms Act I think
(01:11:55):
is an interesting example. On one hand, I like the common
law as a general matter,
as a general system. On the other hand, around two thousand,
there was some evidence that some courts in our fifty
state system were imposing basically unprecedented liability on gun manufacturers
which would never have been imposed on car manufacturers or
alcohol manufacturers, let's say, and such. So as a result,
(01:12:18):
there was this possibly imperfect but, on balance, probably pretty good
restraint on that kind of liability. So I'm just saying
this is something that really calls for case by case
decision making, not just in court, but also in the
legislative process as well as sometimes maybe even in the
common law making process, because I think there are particular
(01:12:40):
reasons to have one or another such standard that may
not apply in other situations.
Speaker 5 (01:12:45):
Can I throw in a fun thing for fun? One
fun thing, for fun?
Speaker 4 (01:12:48):
So in medieval times, sometimes if an object or an
animal hurt you, you could sue them, and the remedy
in some cases would be that you could destroy it
because, it's very unclear, you know, there's a lot of,
a lot of metaphysical things going on there. But I
genuinely think there's something to this, and like, just go
(01:13:09):
with me for two seconds here.
Speaker 5 (01:13:10):
So there's a guy who like shot a.
Speaker 4 (01:13:12):
Drone flying over his house, and obviously it was being
driven by a person, but like the focus
of his ire was this object that seemed to be animated,
right. Like having this appearance of autonomy makes people
kind of project their feelings about what's going on,
(01:13:35):
not just onto you know the company that made the
algorithm for the.
Speaker 5 (01:13:39):
Car, but you know that car ran over my foot.
That hmm, right.
Speaker 4 (01:13:44):
And I think there's, you know, an element of tort
law that is about we talk about retribution, there's a
little bit that's about revenge sometimes, and I think we
should also consider being able to take baseball bats to
the cars. I wrote a paper about it, it's called Revenge
against Robots, in twenty seventeen, just throwing that out there.
Speaker 5 (01:14:04):
I think it's fun. I don't know if I mean
it really.
Speaker 2 (01:14:11):
Please.
Speaker 11 (01:14:13):
So I wanted to ask what everybody thinks about the
government pressure problem because I thought that, you know, the
Court ended up kind of punting, the Supreme Court
ended up kind of punting on that question. But I
(01:14:34):
think that even when there is choice among these
companies, if there are few enough of them, it
seem like they're a pretty tempting target for government pressure,
and I'm wondering what you all think, how do
(01:14:57):
you think things are working with that? I mean, it
seems like, you know, for quite some time the government
pressure was being pretty successful in directing what people were
seeing and what they weren't seeing, you know, across these
different platforms. It is true that that got halted by
(01:15:23):
you know, Musk buying Twitter, but that could easily.
Speaker 2 (01:15:27):
Have not happened.
Speaker 11 (01:15:29):
And you know, it's not clear that Musk was doing
that out of profit motive as opposed to sort of
not just not liking what was going on. And so
I'm just wondering what people's thoughts are about that.
Speaker 6 (01:15:50):
I have a kind of something else to start the discussion.
I'm not a First Amendment guy, and so this isn't
my area, but I wanted to mention and others on
the panel maybe thinking this too. Even if you don't
have a single entity or just a couple entities, if
you're not able to enter the market because of some
bottleneck elsewhere, and this was the Parler example you were getting to,
(01:16:13):
you could have the same problem despite a pretty effective
market because you have a bottleneck at the app stores
or something right, and so I don't know the First
Amendment answer, but I wanted to put that out there,
all right.
Speaker 3 (01:16:25):
So I think that that is a great question, maybe
worth a separate discussion. Let me just try to answer
that because I thought a good deal about it recently.
Apologies if you were in yesterday's AALS social media session
where I mentioned the same thing.
Speaker 2 (01:16:43):
So I think there are.
Speaker 3 (01:16:44):
Two things that two things going on in your question,
and the second one, I think has also things two
ways of looking at it. The first is what happens
if it's government coercion or something close to it. Nra
av Vula was an example of that last year, and
the Court has said, you know the government is actually threatening,
maybe even implicitly threatening retaliation, then that is government suppression.
Speaker 2 (01:17:08):
I think that's right. But the more interesting, more difficult
question is what if there's.
Speaker 3 (01:17:13):
No coercion, but there is kind of government persuasion to
do something. And there are two ways of thinking about it.
One way, which actually came up in the Murthy arguments
from two justices who'd been in the White House, Kavanaugh
and Kagan, and they said, look, we would call
up reporters all the time, or we people like us
(01:17:34):
would call up reporters and say you wrote this article,
such a stupid article, just stop propagating this nonsense. And
you know, they were probably thinking that they were doing
a favor to the reporter as well as to
the readers, especially if this was before the
article was published. Let's say they got
wind of it. They're saying, look, yes, we're trying to
(01:17:56):
get you to stop publishing misinformation. Yes, we know what's misinformation
and what's not is vague, but it's your job as the
reporter not.
Speaker 2 (01:18:04):
To publish misinformation.
Speaker 3 (01:18:05):
Your readers will thank you for not publishing this misinformation.
Speaker 2 (01:18:09):
We think you'll thank us once you.
Speaker 3 (01:18:11):
Ultimately come to see that we stopped you from publishing
a stupid article that would reflect badly on you.
Speaker 2 (01:18:17):
Or sometimes it's not just misinformation. They call up someone,
saying, we hear you're writing an
Speaker 3 (01:18:21):
Article about this particular thing that's happening. If you publish it,
this treaty negotiation will be over. Or if you publish it,
this foreign intelligence action which is really important for
the country, is going to be blown and that's going
to be bad.
Speaker 2 (01:18:37):
Or if you publish this, then just this particular law.
Speaker 3 (01:18:40):
Enforcement action is going to be bad, is going to
be stopped or essentially people will be alerted to it.
Speaker 7 (01:18:47):
Can you see a.
Speaker 2 (01:18:48):
Way clear not to talk about it? Right? I mean
that's perfectly normal. Is it pressure? It's certainly attempted moral suasion.
Speaker 3 (01:18:54):
There's certainly some risk, perhaps indirectly, that there will be retaliation,
but we accept it. Now, let me tell you
the other side of this, and this is an example
that I'm not sure how it plays out. I happen
to own some real estate in California. Thankfully it didn't
burn down. One of the houses, I'm sorry,
(01:19:15):
the one I sold two years ago, my wife and I did,
burned down. The other stuff is still standing and it
has tenants, and I have the right under the lease
to go and look around there. Under certain conditions, I
may need to give them a warning. The police don't
have the same right right without a warrant. So let's
say police department calls me up and says, well, first
(01:19:35):
of all, let us tell you if you want to
say no, say no. We won't, we'll never hold it
against you, and you can trust us on this. You know,
we're just appealing to your good citizenship. But
there's somebody, your tenant, we think may be committing some crimes.
Can you maybe in a couple of days, after
giving them a warning, go and look around and
report to us?
Speaker 2 (01:19:54):
You know, that's not coercion. Maybe that isn't even much
pressure other than moral suasion.
Speaker 3 (01:20:00):
But that, it turns out, makes my search a
government search for Fourth Amendment purposes, likewise as
to certain kinds of race discrimination, certain kinds of interrogation,
and such. So we do have analogies in other areas
which suggest that even persuasion by the government
is something that triggers the First Amendment, even when the
(01:20:20):
persuaded party had every right as a private property.
Speaker 2 (01:20:24):
Owner to do the thing he was persuaded to do.
Speaker 3 (01:20:27):
Which way should we analyze the First Amendment questions?
Speaker 2 (01:20:31):
I don't know.
Speaker 5 (01:20:31):
Can I add a thought?
Speaker 4 (01:20:33):
There's another kind of maybe problematic pressure, and I think,
just so you know, I'm low confidence in both of these thoughts.
The kind of problematic pressure is, you, the platform owner, just
kind of want a quiet life.
Speaker 5 (01:20:48):
You don't want to be like.
Speaker 4 (01:20:50):
Criticized or possibly like you don't even know if someone's
going to like investigate you or do something or call
you in front of Congress, but you're just sick of
going before Congress.
Speaker 5 (01:20:57):
And having to like answer a bunch of questions.
Speaker 4 (01:20:59):
And the thing is, I see why that's a problem,
and the thing is, you know, having thought about it
for five seconds, it's so hard to figure out
how you could intervene in that case to
prevent it without really distorting the way people just speak
and communicate, right, and people's motivations. The one thought I
(01:21:24):
have kind of riffs off your Fourth Amendment claim, Eugene,
because the Stored Communications Act covers situations where you'd have
third parties, where there may
or may not be a Fourth Amendment issue, third parties
that sort of hold the communications of others in an
Internet context. And not only does it say that to force
(01:21:48):
them to give over certain information, you need to make
certain showings, which can be like a warrant or lesser
things in the cases where you don't have Fourth Amendment protection,
it also says that the private companies cannot volunteer to
turn over the covered things, the kind of communications
covered by the statute unless those other showings are made.
(01:22:13):
And I don't know what the equivalent of that would
be in this context. But that kind of move is
something plausible to address the issue.
Speaker 11 (01:22:23):
Other thoughts on this? I guess I would add, I
don't think this needs to absolutely be thought of as
only a First Amendment question. There's also a policy question.
Speaker 3 (01:22:35):
Potentially, you can imagine a statute that says that here
are the situations in which the government can reach out
to speakers or editors, and other situations it can't. Or
maybe here are the situations where it can do so confidentially,
and as to other things, it has to say it publicly.
So if the government wants to sort of say, what
(01:22:57):
an outrageous thing it is that Facebook is allowing X and Y.
If the government officials want to do that, you know,
that's one of the reasons we elect them is to
express their views. But they better do it publicly so
that we can figure out what's actually going on. And
privately, it's only allowed in limited situations where there's some
risk of exposing certain
Speaker 2 (01:23:15):
Secrets or whatever else.
Speaker 3 (01:23:16):
So it may be that the right solution is some sort
of legislation.
Speaker 2 (01:23:19):
I don't know, should we talk about TikTok?
Speaker 3 (01:23:24):
There's no other, well, but the argument.
Speaker 2 (01:23:27):
You said that the argument was over.
Speaker 5 (01:23:29):
It was no, it was still going.
Speaker 7 (01:23:31):
It was still going.
Speaker 2 (01:23:32):
I only heard the first hour and a half. Yes,
what do you want to tell us about it?
Speaker 5 (01:23:38):
I don't know.
Speaker 3 (01:23:39):
I don't know my prediction, and I'm very bad at
these predictions. I have a very bad track record.
Speaker 2 (01:23:44):
But nonetheless I'm mostly taking the no predictions pledge. But
in this instance, my.
Speaker 3 (01:23:50):
View is, first, you're going to have hard time finding
five judges who will try justices who will say I
disagree with Doug Ginsburg and Naomi Rao ANDASA. The fact
that you have a Reagan judge, a Trump judge, and
an Obama judge, all three of them, all three of
(01:24:11):
them reach the same result makes it just hard to
see how five justices would disagree with them. And from
what I heard of the oral argument, again only dealing
with the one side, the challengers, maybe Gorsuch
Speaker 2 (01:24:23):
Had some sympathy for their position, I didn't hear it
from pretty much any other justice. But that's just prediction.
Speaker 4 (01:24:30):
My worry is not so much about TikTok, but that,
depending on what's written, of course, is how much it
allows for pretextual
Speaker 5 (01:24:39):
Claims by the government that this could be
in the future a national security risk.
Speaker 4 (01:24:45):
And therefore, because there didn't seem to be a lot
of barriers in the way people were talking about it
that would prevent the, you know, the issue from
like extending beyond the specifically designated foreign
adversaries in the statute, et cetera, et cetera. I was
also really surprised that they in the in the discussion
(01:25:08):
about whether the government had a permissive, the legislature had
a permissible purpose. It almost, you know, it
did feel like everyone was flipping between, well, what's the
reason why this statute is coming into play?
Speaker 5 (01:25:23):
And it felt a little bit.
Speaker 4 (01:25:24):
More rational basis-y than I was comfortable with in terms
of like searching for is the reason good?
Speaker 5 (01:25:29):
Is this rationale? Okay?
Speaker 4 (01:25:32):
Compared to the kind of more exacting discussion you would
expect in a strict scrutiny way.
Speaker 5 (01:25:37):
I'll say my third.
Speaker 4 (01:25:38):
Hot take is I was I was interested that a
couple of justices did talk about the levels of scrutiny
because I've been kind of waiting for the levels of
scrutiny to go away after seeing like all the recent Second
Amendment cases and some others that are skipping my mind,
just like
Speaker 5 (01:25:52):
NetChoice, they don't talk about it either, right.
Speaker 4 (01:25:55):
There's this seeming moving away from it in decisions without explicitly
saying they are.
Speaker 5 (01:26:00):
So who knows if it comes up in the decision.
Speaker 8 (01:26:04):
If I could pose a question. So I have a
lot of friends who are on TikTok. I'm in Los Angeles,
Like everyone makes their money off TikTok, So there's a
very strong economic incentive just actually powering this app. Let's
just suppose the app actually does get banned, right, what
is the response to the reality that everyone's going to
be using basically third party black market versions of TikTok that
are probably even more dangerous and less supervised, et cetera.
(01:26:28):
And is the approach to just continuously go after these
and try to, you know, scorched earth it, or sorry.
Speaker 2 (01:26:37):
I'm not sure. You say everyone. My sense is that, I
Speaker 3 (01:26:44):
Mean, I think any of us could go and use
a VPN to sure, yeah, access something from a remote
site and so on and so forth.
Speaker 2 (01:26:53):
But the way these platforms work.
Speaker 3 (01:26:58):
Is by making it super pretty easy for people to
access it. You make it a little bit more difficult,
then instead of one hundred and seventy million users in America,
how real that number is I don't know, but let's
say that, I think you may have seventeen million or
maybe one point seven million, and I think from the government's perspective,
it would
Speaker 2 (01:27:16):
Be okay, we can handle that. We don't need perfection here.
Speaker 3 (01:27:20):
If our worry is about harvesting of Americans private data,
that's a lesser worry if the Chinese can only get
a few Americans' private data. If the worry is
about Chinese influence over American public opinion, it's a much
lesser worry if it's just kind of a boutique thing
that a few kind of hardcore tech savvy people do, right.
Speaker 2 (01:27:44):
Or am I wrong?
Speaker 3 (01:27:45):
And is it the case that indeed when it's shut down,
like there'll be new apps that are not available in the
app stores, but people download them, and you will indeed have
something close to the current market share.
Speaker 8 (01:27:57):
I have two thoughts on that. I think it's almost both.
I think there probably will be some alternative. I hope
it is much safer and US made, honestly, or one
of the other platforms, right, one of the other platforms
does actually innovate and they've tried obviously.
Speaker 7 (01:28:12):
With shorts and all these different things on every platform.
Speaker 8 (01:28:15):
I do think, given TikTok's, I mean, I don't have
it because it's actually extremely addicting and it just
eats your life away. But I do think there are
many more people than we think who are
tech savvy. But a lot of people I know have
VPNs who are not tech savvy. You can just get
one and download it.
Speaker 2 (01:28:32):
It's what fraction of the one hundred and seventy million
do you think have
Speaker 8 (01:28:36):
VPNs? I think that's a hard question for me to answer,
but I do think that I have quite a few friends,
like I said, who have VPNs. I don't think
it's actually a huge hurdle if they are addicted to a
social media app that they're like, I spend say ten
hours a day on this or even more in some
weird cut and paste way, that if they're like, all
(01:28:56):
I need to do is go on NordVPN or
something and pay five dollars a month and suddenly I
get this addiction back. Even if it's not one hundred
and seventy million, let's just say it's even fifty million,
I think the question is still raised.
Speaker 7 (01:29:08):
What's the threshold there, And I get.
Speaker 8 (01:29:11):
Your point if it's say a million or two million,
But what I do worry about is what is the bottom
of that number? And are we going to continuously
just keep going down that rabbit hole. Also, you know
at this point the attack is much more
on the national security side. But Facebook and Meta also rot
your brain pretty badly. And I think if we were
(01:29:31):
talking about threats to democracy and all these things, there's,
you know, I've seen things on social media that
are US owned that have definitely lowered my IQ points.
So I'm just wondering where, once again, where the where
the line is drawn.
Speaker 3 (01:29:45):
But there the answer, at least that the government has
given, sure, sure, is quite clear, which is to say,
we're perfectly aware that we cannot ban a platform simply
because we don't like the views. We're reconciled with that.
There's a lot of case law on that. But it's
different when it's something that is indirectly controlled by probably
one of our most serious foreign adversaries.
Speaker 2 (01:30:08):
And yes, so probably what will.
Speaker 3 (01:30:11):
Happen is some of the one hundred and seventy million
will use VPNs, A lot will switch to some other platforms,
some other platforms will arise to take advantage of this opportunity.
Speaker 7 (01:30:21):
We're good with that.
Speaker 1 (01:30:21):
Yeah, I don't think this is necessarily going to play
a role in the court's decision making, But I certainly
agree with Christina's sort of implicit nose count argument that
there's not much sympathy at the outset based on what
the questioning so far has shown. There were four votes
to grant cert, but not a fifth vote to grant
(01:30:44):
the emergency application for a stay pending review that accompanied
the cert petition, which means that the position of TikTok
is already a little bit behind the eight ball. Now,
they didn't deny, they just said the application for a
stay is deferred pending arguments. We may see a decision
on that today or over the weekend or Monday or whatever.
Speaker 2 (01:31:08):
My sense is that if the.
Speaker 1 (01:31:10):
Likelihood of success prong of the stay requirement ends up
being, okay, well, the request for a stay is denied,
or the request for a stay is just not ruled upon,
and the nineteenth arrives, the nineteenth of this month, then
it kind of doesn't matter what the eventual decision says,
at least as regards TikTok. But the political consequences that
(01:31:33):
flow from them. And this is really the thrust of
the previous questions, what happens if TikTok goes the way of
the dodo on the app stores. So much of
the concern that seemed to be animating President-elect Trump's
favor for TikTok is that well, Meta is really the
(01:31:55):
company to whose benefit it will redound if TikTok is
taken out of the marketplace. And at the time
Meta was not particularly in political favor, and now you
know, they've come a long way.
Speaker 2 (01:32:07):
Baby. You know, there's.
Speaker 1 (01:32:10):
The change in Meta's political position is I think relatively
closely connected to what kind of political solution might be
broken in TikTok's favor even if TikTok loses in the
Supreme Court. And the real question is it going to
(01:32:30):
come to that? And we might know as soon as
the emergency application is either granted or denied what the
likelihood of that political calculation changing ends up being. So
that's, I don't know how hot a take that is.
It seemed pretty bland, but.
Speaker 2 (01:32:45):
Hey, let me ask you this.
Speaker 3 (01:32:46):
I'm not I'm not at all confident about this, but
what do you think are the chances that the court
will essentially summarily affirm the decision below on the grounds essentially.
Speaker 2 (01:33:01):
Well, I usually don't.
Speaker 3 (01:33:02):
And this is, actually, this is so hard to predict,
but there's actually some merit to that.
Speaker 2 (01:33:06):
Right.
Speaker 3 (01:33:07):
They're working on a very short timetable. They appreciate the
risk of error. They appreciate that if they set.
Speaker 2 (01:33:14):
Some precedent that could do other things in the future.
Speaker 3 (01:33:18):
If they granted review, not because they really wanted to
resolve the big picture legal issues, but because they thought
it was a really important question that called for judgment
by not just a random three judge panel of a
lower court, excuse me, an inferior court, as the Constitution says,
(01:33:38):
might they.
Speaker 2 (01:33:39):
Just say, and again I'm not at all sure that they will, but might they
but might they.
Speaker 3 (01:33:42):
Just say, this is a good opportunity to say the
bottom line is this and limit the damage we could do.
Speaker 4 (01:33:47):
So it's almost like limiting the decision to its facts.
Speaker 3 (01:33:50):
Well, that is exactly what a summary, that is exactly
what a summary affirmance does. It is precedent only
for the decision actually made in that case.
Speaker 4 (01:34:00):
I think you've made a good prediction there James, it's
not a prediction.
Speaker 1 (01:34:05):
So here's the thing about that: the DC Circuit held, assumed,
without deciding and without going through a reasoned analysis, that
the level of scrutiny that ought to apply and correctly
applies is strict scrutiny. They consciously punted on that and said, well,
it satisfies strict scrutiny, so we don't need to decide
whether strict scrutiny or intermediate scrutiny is available. That seems
(01:34:27):
to me a reason, maybe not a dispositive one, but
a reason not to summarily affirm, because that could be
something that the court, if the DC Circuit had done
an actual, like, here's what we think applies and here's
why, I'd be more inclined to agree, because that seems
to me, even with the precedential value of the decision limited as
to its facts, to leave a very important antecedent question for
(01:34:50):
future congressional behavior completely unanswered.
Speaker 2 (01:34:53):
Oh yeah, yeah, yeah, yeah.
Speaker 3 (01:34:55):
You're right, a summary affirmance will provide minimal information other
than this law is constitutional. The question is whether they'd
be willing to do that. But you're right, you're right.
That's always the problem with the summary affirmance. It's so sketchy.
It's, okay, this is constitutional for some reason.
Speaker 1 (01:35:12):
It was clearly offered as a trial balloon, right,
that's the thing.
Speaker 6 (01:35:15):
So this gives me, I won't call it a prediction,
I'll call it a modification of Eugene Volokh's prediction.
Speaker 2 (01:35:20):
Not a prediction.
Speaker 6 (01:35:22):
But so you could see the court say we affirm,
and not for reasons stated by the DC Circuit, but
for reasons to be described in a later opinion.
Speaker 2 (01:35:33):
Oh, that would be the responsible thing to do.
Speaker 3 (01:35:35):
We know that it's a very short time frame, but
I'm wondering if they might, for some of the
reasons you mentioned, like the worry is what the opinion
will say, and will the opinion inadvertently do these things.
Speaker 2 (01:35:47):
I think even.
Speaker 3 (01:35:47):
Justices on the Court who generally take quite
speech protective views think this has got to be permissible,
for the government to say, we just don't want mainland
China influence over American public debate in various ways. But
we also don't want to set a dangerous precedent for
the future. So but you're right that sort of as
(01:36:09):
a matter of jurisprudence, yeah, of course, even if it's a
short time fuse, they should still give us guidance for
the future, even if they had to do the
decision in the next week.
Speaker 2 (01:36:21):
Josh? So, I spent three hours listening to the argument this morning.
Did anyone else? What were
Speaker 5 (01:36:26):
You doing during the panel?
Speaker 2 (01:36:28):
I was listening while at another panel.
Speaker 5 (01:36:31):
So walk on up and come to the microphone.
Speaker 12 (01:36:38):
Did they beat up the SG as much as they
beat up the challengers? Going into the argument, I agreed
with Eugene. I actually agreed with you that the government
was likely to prevail. I am less confident now. Let
me just explain why, twofold. So first, the Trump brief,
which was widely derided, was mentioned favorably a couple of times.
(01:36:58):
I think Barrett in particular said, well, maybe the president
can make something happen. So scoff if you will, I
think it actually was an effective brief.
Speaker 2 (01:37:05):
Number two.
Speaker 12 (01:37:06):
I wrote a post about two weeks ago saying the
Court can grant effectively an administrative injunction.
Speaker 2 (01:37:11):
So the court often does it. Administrative stays say we're just
Speaker 12 (01:37:13):
Gonna put this on hold, but that's not a hold, it's an injunction.
Just put an injunction in place without regard to
the merits. And when you combine the administrative injunction
with President Trump's let's make a deal, the court has
never seen any of this stuff. You're saying, is Greenland
part of the deal?
Speaker 7 (01:37:30):
No?
Speaker 13 (01:37:30):
No, no, the Panama Canal. They trade TikTok for
the Panama Canal. That's the deal. China gives us
the canal, they get to keep TikTok in America. I
think it is actually a good deal, right, it's actually
a good deal.
Speaker 12 (01:37:41):
Anyway, I wrote my blog post, Gene, actually on your
site, and you have the stage. But the point is,
Alito asked SG Prelogar point blank, can we just
grant, he called it an administrative stay, he's wrong, it's
an administrative injunction.
Speaker 2 (01:37:54):
And she's like, well, you shouldn't do that. But he's like,
can we? And she's like, this is real important. And
then I think Barrett also hinted that might be an option.
Speaker 12 (01:38:02):
So I think what we see in the next couple
of days is a short order from the Court saying
we enter an administrative injunction of the law until further
order of this Court, which could mean until Trump we trade,
we get the Panama Canal back or cut, or until
(01:38:24):
they divest or they write an opinion whatever.
Speaker 2 (01:38:28):
I think.
Speaker 12 (01:38:28):
There are four clear votes to affirm the DC
Circuit, for sure, easy: Roberts, Thomas, Alito, Kavanaugh. There's no
doubt Justice Gorsuch has his libertarian hat on today. He was
very skeptical with the government. Well, there are all these factual disputes.
When he's, when Gorsuch is talking about factual disputes, you know
where he's going. Sotomayor was trying really hard to broker
a compromise. She was really trying hard, but just nothing
(01:38:50):
was working.
Speaker 2 (01:38:50):
Barrett.
Speaker 12 (01:38:51):
I think, similar to your point, she didn't quite know
what the right level of scrutiny was. She was really struggling.
She rejected the argument that there's no scrutiny at all. She
said there has to be some, right. And then there's
Thomas, like there's no free speech issue at
play here. He said that pretty clearly. But Barrett was
really struggling, and I don't think she knows, to be honest.
Speaker 2 (01:39:08):
I think she's still sort of working her way through it.
Speaker 12 (01:39:11):
And then Kagan was kind of just tagging along and
trying to figure out some other compromise. Jackson was a
bit all over the place, So I think the court
is fragmented in ways I didn't anticipate, so they don't
have the power to do anything in the next eight days.
I just don't think they're there, which is why the
administrative stay is
Speaker 2 (01:39:26):
Like, Okay, let's see what happens.
Speaker 12 (01:39:27):
And seriously, if they divest or Trump makes a deal, this
case just drops off the docket and vanishes, and there's
no precedent at all gets set.
Speaker 3 (01:39:36):
Right, And even if they're sympathetic with a Trump brief
but don't want to be seen as that, they don't
have to explain, exactly, or they can even say,
in light of the complexity of the issues involved and
the need for this court to render a reasoned decision,
we're doing it on our own.
Speaker 2 (01:39:51):
Now.
Speaker 12 (01:39:51):
Now, Vladeck and Lederman said, I'm insane, I'm wrong. They
said that under the All Writs Act, the court lacks
Speaker 2 (01:39:56):
Jurisdiction to do that. I think they're wrong. I think
it's jurisdiction to do, a jurisdiction to
Speaker 12 (01:40:01):
Do in aid of their jurisdiction. They need more time.
I think that's in aid of their jurisdiction. Under the All Writs Act, I
think they can do it, and I think we may have
that order maybe even in the next couple of days. So sorry,
I'm not a panelist.
Speaker 2 (01:40:12):
But to the argument, that's why it's the six man.
Speaker 1 (01:40:20):
Is that the time? So we have a few minutes if
somebody wants to.
Speaker 3 (01:40:26):
So I have a question about I think something that
came up with regard to the Parler issue and the
So here's one thing that that that I've been thinking
about that's related to social media.
Speaker 2 (01:40:39):
But it's an even thornier question in some respects.
Speaker 3 (01:40:42):
Let's assume that people stop Googling and start ChatGPT-ing
to answer their questions, which in fact they're already doing,
and in fact Google is already providing these results automatically.
Speaker 2 (01:40:53):
There's no reason to doubt.
Speaker 3 (01:40:55):
That they would do the same for political questions they
want to know, like what would this proposal do, which is
the better policy?
Speaker 2 (01:41:02):
Even which candidates should I vote for?
Speaker 3 (01:41:05):
Why search, get ten results, and have to look
at them when you can get an answer.
Speaker 2 (01:41:11):
Now, maybe a well educated consumer would not do.
Speaker 3 (01:41:14):
That, but many people will do that just because they
don't want to spend a lot of time. We know
that people are rationally, even rational people are rationally ignorant
about politics. So the result will be that basically, if indeed,
say ChatGPT and Google Gemini and maybe one other
(01:41:35):
company have the dominant, the overwhelming majority of the market share,
as they do now, the consequence will be that they
will have extraordinary political power, right, political power that they
could be pressured by foreign and domestic governments to use,
or political power they may just use themselves.
Speaker 2 (01:41:52):
They'll become the king makers.
Speaker 3 (01:41:54):
There may be some pushback on this in various ways,
people blowing the whistle and such, but in part because
it's not terribly transparent, often very hard to prove.
Speaker 2 (01:42:02):
In any event, that's a pretty serious problem.
Speaker 3 (01:42:05):
One possibility is the solution is worse than the problem.
Among other things, even if you can mandate viewpoint neutrality
for social media platforms as to at least what they host,
you can't mandate, imagine, a content neutral AI algorithm, right?
That would be awful, even a viewpoint neutral one.
Speaker 2 (01:42:21):
It wouldn't be good, right, because if I want an
answer to like how old is the Earth?
Speaker 3 (01:42:26):
I want generally a geological answer, not a theological answer,
at least by default, and presumably it would want to
kind of give me mainstream answers. Or if I want
what are the best arguments against abortion. I don't want
a viewpoint neutral answer. I want the viewpoints that are
actually most sensible, most plausible. Okay, So one possible answer
is competition. So now let's turn to the copyright question.
(01:42:49):
I think they're perfectly plausible copyright arguments against what AI
companies did. Perfectly plausible arguments are there that they are infringing,
and the consequence of those arguments will obviously not be Okay,
we shut down all our LLMs.
Speaker 2 (01:43:04):
It's just a matter of money. Right.
Speaker 3 (01:43:06):
Let's imagine there's a judgment against Google. Google says, okay, fine,
we'll make a deal, ten billion dollars, fifty billion dollars
split in some ways between authors' organizations. And maybe we
like that deal because we're going to get a license
and our future competitors, well, they'll pay fifty billion dollars,
the future.
Speaker 2 (01:43:25):
Competitors won't have fifty billion dollars.
Speaker 3 (01:43:28):
Or, even so, one possibility might be maybe we don't
want copyright protection there, even if as a matter
of abstract copyright theory it makes sense. Or another possibility,
maybe we want some sort of limit, like a requirement
that the license be kind of a royalty rather
(01:43:48):
than a flat fee.
Speaker 2 (01:43:49):
But let's say it's a royalty.
Speaker 3 (01:43:50):
Even let's say the equivalent of ASCAP and BMI
say, okay, fine, we'll have a royalty arrangement, but
you have to get a license from us. Then somebody says, well,
all these entities lean to the left or lean to
the right or whatever else, all these platforms, we're going
to come up with a new.
Speaker 2 (01:44:07):
One, new parlor.
Speaker 3 (01:44:08):
Parlor AI and the National Book Guild or whatever it
is that administer licensees say well, no, no, no, you know,
we are private actors.
Speaker 2 (01:44:17):
We are entitled to block you.
Speaker 3 (01:44:19):
From using all of these models because we just don't
want to be associated with evil ideas.
Speaker 2 (01:44:25):
You just described pretty much what happened to Parler.
Speaker 3 (01:44:28):
So even if we think that the solution is competition,
which I think may very well be the case, one
question is to what extent do existing rules, including legitimate
property rights rules, potentially interfere with that competition.
Speaker 1 (01:44:45):
Well, since we are at time, let me offer the
last word to you.
Speaker 9 (01:44:48):
Yeah, I just wanted to know why Eugene thinks the
problem of an AI telling you how to vote is
any different from the problem of the New York Times
telling you how to vote. The Post lost a
whole lot of subscribers when it didn't tell people how
to vote, right.
Speaker 3 (01:45:03):
So I think that the problem with media platforms having
too much power is a real problem. But again we've
long concluded the cures worse than the disease, in part
because the disease may be the New York Times. But
for all we talk about the New York Times, and
important as it doubtless is in New York elections, if
not New York Times as such, but local newspaper may
be the only newspaper in town and may be very
(01:45:26):
important in local elections.
Speaker 2 (01:45:27):
It's just one player.
Speaker 3 (01:45:30):
But if you really have these three entities that have
control over what is the media diet of most Americans
and maybe most non Americans, then you're talking about a
larger problem, of a larger degree. It's still a difference in
degree, but sometimes differences in degree become differences in kind. Well,
we did last face that, which was when we had
(01:45:53):
three broadcast networks that were seen as having very broad
nationwide control.
Speaker 2 (01:45:58):
The consequence was a good deal of regulation.
Speaker 3 (01:46:00):
I actually don't like a lot of that regulation, but
it's understandable that people might be worried about ABC, NBC,
and CBS throughout the whole
Speaker 2 (01:46:08):
Country in a way that they're even more.
Speaker 3 (01:46:10):
Worried than they were with regard to the New York
Times or the Washington Post. And I would say, if
it's just Microsoft and Google, which is what it's looking
like and.
Speaker 2 (01:46:17):
Maybe Claude, we should be even more worried.
Speaker 3 (01:46:20):
Again, that doesn't tell us whether the cure is worse
than the disease, but it does suggest that our minor
little cold may be turning into pneumonia.
Speaker 1 (01:46:30):
Okay, well with that, please join me in thanking our
terrific panel.