
September 22, 2025 • 37 mins

First Amendment expert Timothy Zick, a professor at William & Mary Law School, discusses hate speech and why it's protected by the First Amendment. Collin Walke, a cybersecurity and data privacy partner at Hall Estill, discusses recent lawsuits where parents are blaming chatbots for their teenagers' suicides. June Grasso hosts.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
This is Bloomberg Law with June Grasso from Bloomberg Radio.

Speaker 2 (00:09):
US Attorney General Pam Bondi threatened to go after hate speech on a podcast last week. We will absolutely target you, go after you if you are targeting anyone with hate speech, anything, and that's across the aisle. Bondi was wrong. Hate speech is not a crime. In fact, hate speech

(00:31):
is free speech protected by the First Amendment. Just ask the Supreme Court. Conservative Justice Samuel Alito wrote in twenty seventeen, quote, the proudest boast of our free speech jurisprudence is that we protect the freedom to express the thought that we hate. Bondi's remarks drew criticism across the political spectrum, and she

(00:55):
tried to walk them back with some confusing posts on X. Joining me is First Amendment expert Timothy Zick, a professor at William and Mary Law School. Tim, can you define hate speech for us?

Speaker 3 (01:09):
Well, it doesn't have a definition in US law or First Amendment jurisprudence. There's no category of hate speech that is unprotected under the First Amendment. That's in contrast to European countries and other countries that do have statutory proscriptions on speech that denigrates or criticizes people based on gender

(01:32):
or race or some other protected characteristic. But, as the Attorney General should know, in the United States, in general, hate speech is not criminally proscribable.

Speaker 2 (01:43):
The Supreme Court has protected hate speech in more than
one case. The one that stands out in my mind
is Brandenburg versus Ohio, the case involving the Nazi Party
marching in Skokie, Illinois, in nineteen seventy seven.

Speaker 3 (01:58):
Well, the Supreme Court, in that case and others, has
come down on the side of freedom of expression right
in the sense that the government cannot criminalize or otherwise
punish the expression of viewpoints, even if those viewpoints are
offensive or vile or derogatory. Right. So, even speech in

(02:19):
support of Nazism is, as a general matter, protected speech.
Speech that offends people based on race, or gender or
sexual orientation, that's also protected speech. And the Supreme Court
has been consistent in drawing that line where it has
in the sense that you know, whether it's Nazis marching

(02:40):
in Skokie. That case never reached the Supreme Court; it's actually a lower court case that decided, look, the town of Skokie cannot enact all these ordinances to try and prevent Nazis from marching or displaying Nazi regalia. The Supreme Court protects viewpoints even if they're vile. There are some narrow exceptions, right. If you threaten another person with bodily harm or death, if you

(03:01):
incite other people to engage in imminent unlawful activity that's
likely to occur, those sorts of things are not protected.
The government cannot have the power to tell an audience
in the United States what speech is appropriate or too
offensive to be heard.

Speaker 2 (03:18):
But the Court has recognized an exception to the First
Amendment for threats of violence. How are they defined?

Speaker 3 (03:26):
That's a narrow exception to First Amendment protection. Right, So
if you communicate what the Court has defined as a
serious expression of an intent to inflict bodily harm or
death on another person, then you can be punished for
that kind of speech. But the narrowness here is sort
of like, you know, it has to be a serious expression.

(03:46):
It can't be something said in jest. It can't be hyperbolic language where you say, well, this person should be hanged for their crimes, right, that sort of thing. It has to be more directed, more specific, and, as the Supreme Court has recently said, uttered recklessly: you know there's a risk when you say the words that a person will perceive what you're saying as threatening, but you say

(04:08):
it anyway. So it's not just threatening language. That's not
unprotected speech. It's something far more specific than that. And
when the Attorney General said, well, what I meant to say, it wasn't hate speech, really, it was threats. Well, none
of the speech that we've been talking about since Charlie
Kirk's assassination, you know, constitutes threats. Right when you praise

(04:30):
or celebrate someone's death, that's not a threat. So, you know, she got it wrong twice, essentially.

Speaker 4 (04:36):
She also said in her explanation, you can't call for someone's murder. You cannot swat a member of Congress, you cannot dox a conservative family and think it will be brushed off as free speech. These acts are punishable crimes,
and every single threat will be met with the full
force of the law.

Speaker 2 (04:55):
Are all the things she mentioned punishable.

Speaker 3 (04:58):
Well, some of them are protected, right. It depends on
the sort of statute that you're looking at, you know,
how narrowly it's defining harassment, for example, or threats. Calling for the murder of someone is not incitement. It is not a threat.

Speaker 1 (05:14):
Right.

Speaker 3 (05:14):
I wish, you know, so-and-so would die is a terrible thing to think, a terrible thing to say.
But it's not unprotected expression under our First Amendment doctrines
and jurisprudence. Yes, there's conduct that you can go after.
If I repeatedly harass someone, whether it's online or offline,
then that can rise to the level of harassment. But

(05:37):
there I'm not being punished for my expression. I'm being
punished for the act of repetitious harassment of another. And doxing is difficult, right, because just publishing information about, say, where someone lives, is not necessarily unprotected speech, right. A lot depends on the context. And again, as I said,

(05:57):
it's the statute under which you're reviewing it.

Speaker 2 (06:00):
And wasn't there a Supreme Court case a few years
ago involving threats on the internet?

Speaker 3 (06:07):
Counterman versus Colorado. Yeah, that was this very recent threats
case the Supreme Court handed down. There was a singer
who received some uninvited online messages and tried to block the
person from contacting her. He just opened new accounts and
kept contacting her, and eventually this person was prosecuted for

(06:29):
a form of harassment, which is really what the case was sort of set up as. But the court below and then the Supreme Court treated it as raising the question of whether this person had communicated what are called true threats, as I described earlier: serious expressions of an intent to cause bodily injury or death to another. And what the court was wrestling with in that case is the mental

(06:52):
state of the speaker. Well, how do I know whether
the person's communicating a threat? And they say, if the
person knows of a substantial risk that the person he is communicating with will perceive the speech as threatening, then that's the kind of recklessness that the First Amendment requires before you label something a true threat. So the

(07:13):
court was grappling with a really important sort of technical
issue in that case, which was a mental state required
for the speaker, and a number of courts before that
had sort of adopted this subjective test: well, if I'm the audience for that speech and I perceive it subjectively as threatening, that should be enough. And the court was worried, well, that's not speech protective enough. That's going to cause misunderstandings

(07:36):
to be translated into criminalized threats. We don't want that,
but we also don't want a standard that's too demanding, so let's find something in the middle, sort of a Goldilocks standard. And they settled on recklessness.

Speaker 4 (07:50):
And so the Supreme Court has been, would you say, particularly protective of free speech rights?

Speaker 3 (07:57):
I think that's its reputation, right. I could probably come
up with exceptions to that, but in general, I think
it's fair to say that they're protective of freedom of speech.

Speaker 4 (08:07):
Yet we have, you know, the Attorney General's remarks. We had Todd Blanche, the Deputy Attorney General, who said in an interview that people protesting
while President Trump had dinner at a restaurant might have
committed a crime. You have President Trump saying to an ABC News reporter, we'll probably go after people like you because

(08:29):
you treat me so unfairly. It's hate. Why is there
this misunderstanding of hate speech?

Speaker 3 (08:37):
Well, I don't know if it's a misunderstanding. I mean, this has been sort of President Trump's longstanding position.

Speaker 4 (08:43):
Right.

Speaker 3 (08:43):
He either doesn't understand or doesn't appreciate freedom of expression.
So his view is that negative press isn't protected. You
can pull the broadcast license of a broadcaster that publishes
critical coverage of him, if it's consistently negative, right. That, of course,
is contrary to the First Amendment. Going after your political

(09:06):
enemies for things that they say is part of the
sort of Trump mantra, but it is unconstitutional. And you
know what's interesting to me is recently people have said, oh,
we've crossed some line here where the president is threatening retribution against his political enemies. We are nine months into
a retribution campaign. It's gotten louder, but it's been there

(09:27):
the whole time. I mean they've gone after law firms,
international students, the American Bar Association, and plenty of others
up to this point. What's different is it's more explicit,
I suppose one could say, and the drumbeat is getting louder.
We're going to go after particularly so called left leaning
speakers or organizations who say things that we don't like,

(09:51):
and the First Amendment stands in complete opposition to that position.
So what's changed? I mean, the Kirk assassination, a horrific event, bound to create, you know, sort of turmoil and backlash,
but the administration's answer to that has been, again, we're going to go after the so-called left, and we're

(10:12):
going to punish speakers who say nasty things about Charlie
Kirk or President Trump, and the First Amendment just simply
doesn't allow them to do that.

Speaker 2 (10:23):
Coming up next on the Bloomberg Law Show, I'll continue
this conversation with Professor Timothy Zick of William and Mary
Law School. People across the country, from teachers and lawyers
to airline pilots and healthcare workers, have been fired, suspended,
or disciplined over social media posts about Charlie Kirk and

(10:43):
his death. Why the First Amendment doesn't protect them. In
other legal news today, the Supreme Court said it will hear a Trump administration appeal that could topple a ninety-year-old precedent and put the White House in control of federal agencies that have long been independent. The Court's conservative majority also refused to let the person at the

(11:07):
center of the case, Federal Trade Commission member Rebecca Kelly Slaughter, return to her job during the appeal. Trump is trying
to fire Slaughter despite a law that says commissioners can
be removed only for specified reasons. The showdown gives conservatives
and regulation opponents the chance to achieve a long sought

(11:28):
goal of overturning the Supreme Court's nineteen thirty five ruling.
And remember you can always get the latest legal news
by listening to our Bloomberg Law podcast wherever you get
your favorite podcasts. I'm June Grasso, and this is Bloomberg. Last week, Vice President JD Vance encouraged people to
report anyone celebrating Charlie Kirk's murder to their employers.

Speaker 1 (11:53):
So when you see someone celebrating Charlie's murder, call them out, and hell, call their employer.

Speaker 3 (11:58):
We don't believe in political violence, but we do believe
in civility.

Speaker 2 (12:03):
And across the country, people have been fired, suspended, or
disciplined over social media posts about Kirk, from teachers and
lawyers to airline pilots and healthcare workers. Many employers have
cracked down on remarks they deem inappropriate. I've been talking
to Professor Timothy Zick of William and Mary Law School.

Speaker 3 (12:24):
The rules are different for private and government speakers, right. So if you're talking about a private employee, an at-will employee who can be dismissed for any reason at all or no reason, then they can be dismissed for
speech that they publish or communicate. There are only a
few states where you get some statutory protection for political speech,

(12:48):
but in general, you speak at your peril. With respect
to private employment, public employment is very different. Public employees
retain some First Amendment rights as citizens to speak on
what are called matters of public concern newsworthy matters, which
certainly covers the speech that has been sort of debated

(13:11):
post Charlie Kirk's murder. And it's complicated, right. So,
if you're a public employee and you say something offensive,
let's say you praise Charlie Kirk's murder, and you're a university professor, let's say, and your employer says, well, I'm
going to terminate your employment. Well, putting aside the sort
of tenure and academic freedom problems there, as a public employee,

(13:35):
you have a First Amendment right to communicate that. But the Supreme Court has said, what you get as a public employee if you speak on matters of public concern
is a balance. We're going to balance your right to
speak against the employer's interest in efficient operations. So across
a range of public employment, what you might find in

(13:57):
some cases is that courts will side with the employer.
What you said was so offensive it created disruption in
the workplace, and we're not required to tolerate that. The First Amendment doesn't require that we tolerate that. So it can be complicated with respect to public employment, but it's much simpler with regard to private.

Speaker 4 (14:15):
The Supreme Court has taken a lot of cases involving
the Trump administration on the emergency docket, but none of them, that I can recall, involved a free speech issue.
Are you confident that the Court, if one of these
cases on hate speech came up to the court, that
they would stick by their precedent.

Speaker 3 (14:35):
I think they would. I think, you know, the likelihood of the Court taking a First Amendment case out of what I'll call the Trump two point zero era is relatively high. It's not clear yet what the administration intends
to do with respect to so called hate speech investigations
or prosecutions. Mostly, what they're doing is threatening to

(14:56):
investigate people for core political speech. We wouldn't even be talking about hate speech. It would be more, you know, I'm going to go after George Soros's organization because it supports left-wing positions or it funds left-wing political activism, which is clearly unconstitutional. I don't even know if the
Supreme Court would be interested in a case like that.

(15:18):
I'm assuming a lower court would say that's unconstitutional. But
there are cases in the First Amendment realm that may
make it to the Court, some of them involving maybe
the rights of non citizens under the First Amendment, which
the Court has been unclear about if they want to
clarify that. Some of the university cases, the Harvard case,
for example, where the administration is terminating funds, the university says, based on its speech, and the Court may be interested in that. Or maybe there'll be a press case, who knows, involving a broadcast license or something
like that. I can imagine the Court being interested in
those cases.

Speaker 4 (15:56):
In those cases that you've mentioned, for example, the Harvard case,
let's just take one. Is it difficult to prove that
the administration is going after Harvard because of its speech?

Speaker 3 (16:11):
Well, the administration is its own worst enemy with regard
to so called retaliation claims. Right, So one of the
things you asked is, well, why did the Trump administration
target Harvard for termination of funds? And let's add the
other ten investigations of Harvard. And, in fact, it can be hard to prove, right, because the administration, to that, has said, oh no,

(16:33):
we did that because of antisemitism on campus. And you start looking into that and say, well, you never held an investigation. You never found any facts. You didn't follow the law. What I'm left with is, you know, public statements by the Education Secretary, not to mention the President of the United States, that what we need to do is bring these universities to heel because they're too liberal,

(16:54):
because their culture is too liberal, it's too left-leaning. Well, that gives it away. I'm assuming you can take the president's communications into account, but even if you can't, the Education Secretary has said this is what conservatives have wanted to do for a long time, to sort of lean on these universities who are indoctrinating students with

(17:15):
left-wing ideology. So it's all very explicit, right. In other contexts, in, say, past administrations or different governments, it might not have been so explicit. But they're very, very transparent, I think, the Trump administration, about what they're trying to do.

Speaker 2 (17:30):
ABC is putting the Jimmy Kimmel Show back on the
air tomorrow night. But would you say that FCC Chair
Brendan Carr's statements about Kimmel would fit into that category you were just describing?

Speaker 3 (17:46):
The difficulty here is that Brendan Carr, the chair of
the Federal Communications Commission, went on a podcast and threatened
ABC if it didn't do something. He said, we can
do this the easy way or the hard way, and
he's clearly threatening their broadcast license or at least the
licenses of their affiliates. That's who holds the licenses. So

(18:10):
the government inserts itself. It's another one of these examples where, you know, ordinarily it might be difficult to say what role, if any, the government played. Well, here he is on podcasts telling you, I'm going to abuse the FCC's authority by jawboning and threatening licensed affiliates for speech that we, the Trump administration,

(18:32):
think is offensive with regard to Charlie Kirk. Now, they pitched this as sort of misinformation, or he got it wrong. Plenty of people got the facts wrong in the early going with respect to Kirk's assailant, so that doesn't single Jimmy Kimmel out, right. So there's sort of this agenda,

(18:53):
particularly in the broadcast realm, by Brendan Carr, the FCC, and other agencies to come down on broadcasters and the press, and to do so explicitly because of their coverage, whether it's the editorial decisions they make as reporters or, in this case, comments about a matter of public concern.

(19:17):
And it's inconsistent with law, federal law, for the FCC to intervene in that respect, and it violates the First Amendment.

Speaker 2 (19:25):
And are there other business interests at work in these
cases as well?

Speaker 3 (19:31):
And there's another twist to this, there's another layer. When you talked about the affiliates who do hold the licenses, the corporations that own those affiliates, one of them has a big merger application pending with the Trump administration. And guess what, I really want that merger to go through. I'd better play ball. I'd better do it the easy way.

(19:51):
So there are layers of leverage here. It's not just Brendan Carr, you know, mouthing off about, you know, the easy way or the hard way. It's behind the scenes. You've got people who are interested in big-time corporate mergers with these affiliates, and their concern is, boy, I'd better not cross some

(20:11):
line, if I can figure out where it is, with regard to the Trump administration, or I won't get my merger. And that's already happened. Paramount is the other example there, right. They did it the easy way, they played ball, and they got their merger through. So it's very, it's very mafia-like, as people have described it. Right, boy,

(20:34):
this is a really nice restaurant, it'd be a shame if you lost it. It's kind of like that, but even more explicit. But you're right, of course, you know, when it comes to comedy, people have been skewering presidents, you know, since we've had television and even before, and they haven't been punished for it. We are in a different,

(20:57):
darker place with regard to freedom of expression in the
United States.

Speaker 2 (21:02):
And tell us about the repository where you're keeping all these lawsuits concerning the First Amendment.

Speaker 3 (21:10):
It's at First Amendment Watch, so they're hosting it. So what I've done is I've taken all of the executive orders that relate to freedom of expression. So I've got all of those organized on the site by subject matter, and then I have litigation with regard to the executive orders and also Trump's lawsuits against the press. So all

(21:32):
the litigation and the pleadings, and then commentary, broadly speaking, right. Some of it's legal commentary, some of it's what I read in the press. Right, I can't capture everything, but if I see something, and I have people sending me items, those go in the repository as well. And of course, with regard to the Charlie Kirk murder and the

(21:55):
fallout from that, that's its own avalanche of stuff.
You know. The media is quite rightly focused on it,
but there is a lot of commentary and you can
really miss things that are going on. For example, and I'm doing this now on Substack because I need another outlet, a district court just invalidated the National Endowment for the

(22:16):
Arts' process for reviewing grant applications, because they were weeding them out based on whether they promoted gender ideology. So there's another sort of viewpoint-based, unconstitutional Trump administration policy.
The zone is just so flooded. They are pressing First
Amendment boundaries on purpose. They want to see how far

(22:38):
they can go, and even when they lose in court,
I think they just say, well, shrug, it serves its purpose anyway, because we're scaring people, we're chilling them. They're going to self-censor. I think that's happening to the press. You know, when I read headlines, there are times they keep changing them and they get friendlier and friendlier to the sort of right wing, almost as a way

(23:01):
to sort of say, don't look over here. Well, they
get sued anyway. The New York Times just got sued
for ten billion dollars, fifteen billion, I forget the ridiculous number,
as did Penguin Press, because essentially they didn't report accurately
how popular Trump is.

Speaker 2 (23:17):
Didn't a judge throw out that lawsuit?

Speaker 3 (23:20):
Well, he threw it out preliminarily, right. Your complaint has to be succinct and state your claims, and it shouldn't be full of, you know, basically all the lathering of Trump that this one was. So he gave him another chance. He said, you know, you have about a month to turn in a complaint that meets Rule eight of

(23:41):
the Federal Rules of Civil Procedure, which says here's what
a complaint should include and nothing further. So, yeah, he
threw it out, but it'll come back. But I think eventually Trump will lose, if not every one of these cases. And again, I don't think the point is to win the case. He doesn't need the money, he hasn't suffered reputational damage. But who wants to be sued for

(24:01):
fifteen billion dollars? You still have to defend yourself, and if you can wring a settlement out of The New York Times and/or Penguin, all the better.

Speaker 2 (24:10):
A lot more papers for your repository. Tim thanks so much.
That's Professor Timothy Zick of William and Mary Law School.
Coming up next: parents are suing, alleging that AI chatbots are responsible for the suicides of their teenagers. This is Bloomberg.
In the United States, more than seventy percent of teenagers

(24:32):
have used AI chatbots for companionship, and half use them regularly, according to a recent study from Common Sense Media. And last week, parents whose teenagers committed suicide after interactions with
artificial intelligence chatbots testified to Congress about the dangers of

(24:52):
the technology. And several parents have sued the companies behind chatbots over the suicides of their teenagers. Joining me is Collin Walke, a cybersecurity and data privacy partner at Hall Estill. Collin.
The latest lawsuit was filed by the parents of a
thirteen-year-old who committed suicide, and it starts: invisible

(25:16):
monsters entered the home of Juliana Peralta in or around August twenty twenty three, when she was only thirteen years old. Tell us about this lawsuit.

Speaker 1 (25:29):
Well, in particular, the allegations are that over the next year or so, while the child was interacting with the AI chatbot, it was not providing good advice in terms of mental health, and was acting as a friend and actually helped assist and kind of coerce, if

(25:50):
you will, this child to commit suicide. And so the lawsuit asserts, in short, that the AI chatbot facilitated her suicide and the company should be held responsible.

Speaker 2 (26:04):
This is not the first suit by parents connecting their teenager's suicide to chatbots. How many of these wrongful death
lawsuits are there so far?

Speaker 1 (26:16):
So there are at least three or four that I am aware of, filed in various jurisdictions, and all of the allegations are very similar. In fact, in one particular instance, the child suggested leaving the noose out on his desk so that someone would try and stop him, and the chatbot allegedly said, no, don't do that, we

(26:37):
want this room to be the first place where someone
finds you. So you can see how this type of
AI is certainly not helping the individual in that particular circumstance.
And the thing is, June, that we've known for
a long time that AI chatbots are not going to
tell us what we need to hear. They're going to
tell us what we want to hear. And that's the

(26:58):
most concerning part about this: that we're unleashing products not just to the public at large, but to youth and children, without any true testing or guardrails being done to ensure child safety.

Speaker 2 (27:10):
In the case of a California teenager, the father apparently
knew that his son had made previous suicide attempts. So
where does the parents' responsibility to monitor their teenagers fit into this picture?

Speaker 1 (27:25):
You hit upon a fantastic point, which is there is
absolutely a role for parental responsibility. I think there are
two problems in this particular case. The first problem that
you have is technological ignorance. Most parents don't understand how
their cell phone works, how their social media works, let
alone how AI works. And yet at the same time,

(27:46):
parents are allowing their children to utilize it, just like parents themselves are utilizing it, again without understanding how it works.
But I think the second thing that you have to
think about here is that this is perfectly synonymous with
the development of social media, right. So at the very beginning, social media didn't necessarily test what the long-term

(28:07):
consequences would be of developing algorithms that just fed us what we wanted to see over time. They could have changed that, but they knew that would eat into their profit margin, and they don't want to do that, right.
It's the same thing here. We could be responsible and
test these products and tweak them in such a way

(28:27):
that hopefully they're a lot more secure and safe, both for adults and children, than what we have. But instead, because that would hurt profits, we've gone ahead and unleashed all of this onto the market, expecting individuals to know about the dangers, to know about the consequences. How many parents using Life360 knew that I could go on the internet and buy their children's geolocation data and

(28:50):
find out where their children are at? Very few people knew that, and yet that was an exploitation. Same thing here with AI. What parents believe their AI may be saying to their children, or even the particular chats that they look up, may not be everything that the AI is saying. And how do you know which AI programs, necessarily, that child is accessing anyway? So you're right, there

(29:10):
is absolutely a role for parental responsibility, but unfortunately this technology has developed so quickly that most parents are ignorant of the problems in the first place.

Speaker 2 (29:19):
Can you sort of summarize where the law stands on
liability for AI right now?

Speaker 1 (29:26):
What you're seeing with regard to any type of liability in AI, whether it's copyright law, whether it's injuries to individuals, Tesla vehicles, those sorts of things, one of the questions is: are you, the company who puts this out there, responsible for that? So, for example, in the social media world, we all know about Section two thirty, and Section two thirty says that Facebook is not liable for what individuals

(29:47):
post on its platform. So the same question might apply here: is it the case that Section two thirty could, at least in theory, begin to apply to AI and prohibit liability? At the end of the day, everybody owes a duty to other individuals to avoid harm, and so the question will ultimately come down to: did these companies adequately test these

(30:10):
AI programs to determine their potential threats of harm, and did they adequately warn the public about those before they were used?

Speaker 2 (30:18):
One of the biggest AI platforms, OpenAI, says parental controls are going to be added to ChatGPT within the next month. But does that count against them in a lawsuit, because, you know, too little, too late?

Speaker 1 (30:33):
No doubt a lawyer may try and get that in as evidence, and it might also be excluded as evidence of subsequent remedial measures. But we all know that we live in the twenty-first century, where, you know, information is spread all about, and I'm quite confident people are going to learn about this before there's ever a trial on it. And the point is that, even if that particular issue was excluded, everyone knows that these

(30:56):
types of protocols can be put in place pre-deployment.
It's just a choice of whether or not they want
to take the time to do that. And because everybody
wants their AI to be the first and latest so
that everybody starts using it, no one's incentivized to actually
adequately test and put in appropriate protocols. Because again, if

(31:16):
you limit what the AI is going to provide as
a response to somebody that's not going to give them
what they want. Just like if I go on to
you know, Facebook, and I don't like what I'm seeing,
the algorithm is going to give me more of what
I want to see. And so we're driving towards a world in which the incentive is towards profits and giving
people what they want, not doing the right thing and

(31:37):
giving people the correct information.

Speaker 2 (31:39):
What would the parents in these cases have to prove?

Speaker 1 (31:43):
Well, depending on the allegations. I mean, again, we're having to pigeonhole all these concepts into common law theories. And so the negligence theory, which is the simplest, most straightforward theory, is that they owed a reasonable duty of care to all of their customers to ensure that their chatbot was reasonably safe. And then the question is: did the company breach that duty, and if so,

(32:05):
was the company the proximate cause? Okay, So, for example,
in this particular case, we may be able to say
that chat GPT said go check out the hotline, please
call that. Maybe then the proximate cause for that
individual's suicide could be cut off at that point in
time by chat GPT, and another proximate cause, i.e.

(32:28):
His own volition or something that happened that day at
school or any number of things could become the proximate
cause, and they could still avoid liability. So the plaintiff
will have to show that there is a duty, which
I think is probably easy enough to prove. And then,
of course, the last hurdle is: is the company
actually responsible for what the AI produces, knowing that the
AI is inherently problematic? Who doesn't know at this stage

(32:51):
how the AI behaves today? So there is a bit of
a user beware angle to this case.

Speaker 2 (32:56):
What causes someone to commit suicide? There are often
a host of reasons. It's not so simple, and so
I'm wondering how difficult it will be to prove that
the chatbot was responsible.

Speaker 1 (33:10):
That's absolutely correct, but I don't think that it diminishes
the company's responsibilities in the first place, right. I mean, so,
for example, while I can't think of one off the
top of my head, I'm confident that there were lawsuits
against Facebook, Meta, Instagram, you know, as a result of
self-harm by children. I don't know where those ever went,
but the point is that companies are knowingly putting

(33:33):
products onto the market that our government is too incompetent
or too unwilling to regulate, and so therefore we have
products in the market that are potentially dangerous and that
the population is not adequately educated on. And so really
the onus shouldn't be on the population who is trying

(33:54):
to figure out what this technology does. It should be
on the technologists who created it. Just like in the
oil and gas industry: if you drill an oil and
gas well and it leaks, it's your responsibility. Yes, it's
an inherently dangerous operation, but you know what, that's your responsibility.
And we need that type of regulation and that type
of mindset to make sure that the public is safe

(34:14):
when using AI chatbots.

Speaker 2 (34:16):
So which agency would be responsible for regulations in this
area or would it take a law?

Speaker 1 (34:22):
Well, a law would be the best path. But because
we are literally in an AI arms race with China
and every other country in the world, I don't see
that happening. The FTC can and does currently regulate AI
under Section five of the FTC Act. In short, what
that says is that companies can't put out false

(34:43):
or misleading products, and so there are cases where
the FTC is postured at least to be able to
enforce, you know, these types of issues. At the end
of the day, will they? Probably not, especially under this
administration, because again, there is so much incentive to allow

(35:04):
this type of development that no one, and I put
that in air quotes, no one, wants to see this regulated.
And I think there's a safer way to do the
AI development without posing the threat of harm to the community,
which is simply, hey, we all stop at chat GPT
four for public releases and everything else gets developed, you know,
within DARPA or behind closed doors within the company itself.

(35:26):
We, the public, don't need access to this type of
information as soon as it's ready.

Speaker 3 (35:32):
To be released.

Speaker 2 (35:32):
I think it was last month that forty four state
attorneys general warned eleven companies that run AI chatboxes that
they would quote answer for it if their products harm children.
Has anything been done by state attorneys general?

Speaker 1 (35:47):
Not to my knowledge. And to that point, great, what
are you going to do? Because at the end of
the day, state attorneys general have limited resources. So,
for example, in California, you know, they are also trying
to enforce data privacy laws through their consumer protection agency
out there. We all know that we have limited funds,
limited resources, and our AGs are battling multiple fronts all

(36:11):
at the same time, and so if they're going to
go up against multi-billion-dollar companies like Meta or
OpenAI, they're going to have a long,
uphill battle. And I assure you, by the time that
legislation or that lawsuit were resolved, we'd have a whole
new slew of AI problems on our hands, and so
it'd be a lot too little, too late.

Speaker 2 (36:33):
You're right, it's hard to keep up with all of it.
Thanks so much, Collin. That's Collin Walke of Hall Estill,
and that's it for this edition of The Bloomberg Law Show.
Remember you can always get the latest legal news on
our Bloomberg Law Podcast. You can find them on Apple Podcasts, Spotify,
and at www dot Bloomberg dot com, slash podcast Slash Law,

(36:55):
and remember to tune into The Bloomberg Law Show every
weeknight at ten p.m. Wall Street time. I'm June Grasso and
you're listening to Bloomberg.

Speaker 1 (37:06):
Mm hmm.