
April 13, 2021 61 mins

What will we do when robots and artificial intelligences break the law? In this episode of Stuff to Blow Your Mind, Robert and Joe discuss the issue of robot moral and legal agency.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to Stuff to Blow Your Mind, a production of iHeartRadio. Hey, welcome to Stuff to Blow Your Mind. My name is Robert Lamb and I'm Joe McCormick. And right before we started recording today, we were just talking about that iconic scene in Return of

(00:22):
the Jedi where the droids are sent to the droid
torture chamber. Do you remember that? I guess it's not just a droid torture chamber. It's sort of like the, uh, the droid onboarding center, right, where, you know, R2-D2 and C-3PO have been given as gifts to Jabba the Hutt and they go meet their new, like, droid boss and he's like, yeah,

(00:43):
you're a feisty little one, and he's signing them in,
but, uh, he sees that R2-D2 is a bad robot who needs discipline, and R2-D2 is confronted with these images of robots being punished with various corporal punishments, like one is getting stretched on a robot rack and another one is getting its feet burned. Yes, this is a, this is

(01:05):
a great scene, one that definitely burns its way into your brain as a young viewer, and maybe you don't think about it that much for a long time, but, uh, it's still in there. It takes place in the bowels of Jabba's palace on Tatooine, and it's, um, yeah, it's like droid intake but also droid corrections. There are a

(01:27):
number of different departments that I think are converging here, and it ultimately kind of raises some interesting questions about, um, about ethics and punishment and crime, certainly as it relates to robots. Of course, one thing that's important to stress here is that none of this was really intended in these scenes. This was about

(01:49):
having droids doing things that humans would be doing to
each other in other pieces of cinema, certainly things like old pirate movies or old Sinbad movies or what have you. I mean, that's kind of Star Wars in a nutshell, right. Uh, this whole portion of Return of the Jedi is essentially a big pirate movie, a big swashbuckler set in an alien location. Oh yeah,

(02:13):
Jabba the Hutt is a pirate captain, yeah, but something interesting occurs when you replace the humans in these trope-heavy scenes with machines. Uh, and then you think about it, you know, you think about, well, why is that robot torturing the other? Like, as if it makes perfect sense if it's humans doing it. But then the things that we create in our image, when they're

(02:35):
doing it, suddenly we start seeing the flaws in our reasoning. Suddenly we start questioning, well, how is this whole system supposed to work? And maybe this whole system doesn't work. Well, yeah, there are multiple levels of absurdity in the scene. One is the idea that this robot is just sort of, like, coolly telling R2 that he is going to learn some discipline. But then the image that accompanies that

(02:57):
is, like, clearly just extreme robot torture, like it's something way beyond what would have to do with discipline in the real world. But then the other level of absurdity is that it's robots in the scene. But coming off of the issue of just, like, barbaric pirate torture and moving to the broader question of robots and discipline and punishment. Uh, this is something that

(03:21):
we actually wanted to talk about today because the issue
of robot moral and legal agency is something I've been
interested in for a long time. I've talked about it,
It's come up on the show in the past and
in briefer ways, um And today I wanted to come
back and devote a full episode to the subject. I
guess actually we're gonna be talking about this for a
couple of episodes now. The question of as machines AI

(03:45):
robots become more independent and act more like agents, more
like humans do, how are we to understand their moral
and legal culpability when they do something that harms people?
And is there such a thing as robot punishment robot discipline?
Do these concepts reflect anything that's achievable in the real

(04:06):
world and practical and if so, how would any of
this work? Yeah? I think one of the most interesting
things about this topic is that it does force us
to force a face off between what robots and AI
actually are or will be, and how we think about them, indeed,
how we anthropomorphize them. Um And perhaps it might be

(04:26):
helpful to take a step back and think about something far less advanced than a robot, something more like a hammer. Okay, so everyone's
heard the old adage that it's a poor carpenter who
blames their tools, right, But of course we do this
all the time. Uh, the hammer slips, it hits our fingers,
and we may, at least in the heat of the moment,

(04:48):
blame the hammer for the failure. Now, we may get
over this quickly, but then again, we may decide that
the hammer truly is at fault and it should be
used less. We might also take this idea to a
number of different extremes. We might decide that the hammer
is not merely at fault but faulty, and then we're
entitled to at least a refund for its purchase. Or

(05:09):
we might decide that the hammer needs to actually be punished.
And and this, of course is ridiculous. And yet the
idea of punishing the hammer by say, putting it in
the corner, or perhaps you have an old toolbox of
shame that's just for the misbehaving tools, or maybe it's
it's less thought out and you just throw the hammer
across the yard as punishment for what it has done

(05:31):
to you. Um, again, these are ridiculous things to do, but the idea of doing them is not that far from us. Um, those of you listening, you may have engaged in this sort of thing as well. You might also simply throw the tool away, an otherwise perfectly good tool. Um, I know that I did this once with a knife-sharpening gadget that caused me to cut my finger. And my, like, reaction was, this thing

(05:53):
has now injured me, it has drawn my blood, I'm getting rid of it. It goes in the trash. It bore malice against me. Yeah, or, you know, ultimately, I mean, you get into arguments about different tools, like is this a dangerous tool? And in that case that was my reasoning. It's like, this tool is dangerous. It's not enabling me to do what I want to do

(06:15):
without drawing blood, So it goes in the trash. Um.
But then there have been other cases where, like, I had a mandoline for slicing up carrots, and, um, I, like, nicked my finger on it. Oh, I've nicked my fingers on those, those things are brutal. They can be. But I nicked my finger not using it, but going into the drawer for something else. So I punished it by

(06:36):
putting it at the very bottom of the drawer, but I
didn't throw it away. Uh huh. So I think if
we all think, think back, you know, we have examples
of this sort of thing from our from our life.
Well sure, I mean, I'm going to talk in this
episode about some of the ways that we mindlessly apply
social rules to robots. But yeah, I think what you're
illustrating here is that you don't even have to get

(06:58):
to the robot agency stage before people start doing that. I mean people mindlessly, to a lesser extent, mindlessly apply social rules and rules derived for managing human relationships to inanimate objects with no moving parts. Yeah. Yeah, you don't even have to get to a Roomba or anything, or, you know, you can deal with the hammer, the

(07:18):
can opener. But, you know, it's also a sad fact that many pet owners will punish an animal for a transgression, but scientific evidence shows that this tends to not actually work, at least in most of the circumstances that it's used. Um, so, you know, even, it's not merely with tools and inanimate objects,

(07:41):
but even with non-human entities we're liable to engage in this kind of discipline-based thinking. Now, most of the studies, I think, necessarily relate to dogs, if I remember correctly. And there's a lot going on here that
doesn't relate directly to inanimate objects and robots, but it
illustrates how we tend to approach the punishment of other
agents and perceived agents. Well, yeah, there's a disconnect, and

(08:03):
this will be highlighted in one of the papers we're
going to talk about in in this pair of episodes,
But there's a disconnect in that punishment is often logically
characterized as serving one type of purpose, but then is
applied more like it serves another type of purpose. So
like it is logically explained as say a deterrent, right,
I mean, if you if you're talking about uh, legal

(08:25):
theories of punishment, one of the main things that people
come up with is say, well, the remedy provided by
the law is in order to punish the person who
did the bad thing in order to send a message
that people should not do this bad thing and thus
maybe discourage other people from doing something similar in the future,
or discourage the same person from doing it again. And
if it were to actually serve that purpose, it's debatable

(08:48):
in what cases it does actually serve that purpose. Maybe
sometimes it does, but that is a you know, you
could argue that's a rational, logical thing that prevents harm.
But the way punishment is actually often inflicted in
the real world seems to be more consistent with judgments
based on like emotional satisfaction of the idea of having

(09:08):
been wronged. Yeah. Yeah. And then also we get into
this area where, uh, we have a couple of different
factors encouraging traditions of discipline. Um, particularly if we look at parenthood, where there's some crossover between discipline and parenthood and discipline and the criminal justice system. But, uh,

(09:28):
you know, uh, not everything is going to line up
one to one here. But um, on the childhood example,
it's been argued that parents use punishment first of all,
because it's an emotional response out of anger and anger
that may be mismanaged. But then on top of this,
it's you know, something that's culturally passed down and punishment

(09:48):
may seem to work. I was reading about this in
a Psychology Today article by Michael Karson, PhD, JD. And, uh, this is what they said, quote: Because the child is inhibited in your presence, it's easy to think they would be inhibited in your absence. Punishment produces politeness, not morality. Thus the inhibited, obedient child inadvertently reinforces the

(10:10):
parent's punitive behavior by acting obedient, for the sorts of parents who find obedient children reinforcing. Yeah, that raises an
interesting question. I mean, I've been mainly thinking for this
episode about about legal punishments, but like when it comes
down to parenting, that's a very different kind of thing
because both parenting and the legal system involve punishment, but

(10:32):
parenting is not subject to a legal system, right, So
there is no systematized way by which justice is administered from a parent. It's just, I mean,
I think a lot of times it's just sort of
like whatever the parent can manage to do in the moment,
because like the kids driving them crazy or something. Yeah,
usually the child can't take it to a higher court.

(10:53):
But I mean, I think you're absolutely right that whether
you're talking about discipline administered by a parent or the
justice system as a whole, I'd say that both are probably
based more on tradition and philosophy and less on a
scientifically rigorous study of the most efficient ways to reduce harm.
And one of the interesting things about thinking about how

(11:14):
law could potentially be applied to harm caused by autonomous
machines is that it may help give us some insights
on ways that the justice system as it exists and
is applied to humans today tends to behave irrationally already,
like with respect to humans. Yeah, and again, this is
what's so interesting about this this paper, I mean, well,

(11:37):
the papers that we're going to discuss this topic in general, though,
is if you start, you start comparing machine possibilities to
human possibilities, and it's on one level of thought experiment
in how you would hold machines responsible, but then it
makes you rethink the way humans are held responsible. You know.
It's like, um, you know, I think you have it pretty squared away, like, if an adult sells a

(12:00):
pack of cigarettes to someone who's underage, right, but then
when a machine does the same thing, how do
you treat the machine? Do you treat a machine like
an adult? And then in trying to figure out how
to treat this machine, does it make you rethink how
you should be treating the adult who engaged in this behavior?
I don't know. Yeah, And I think a lot of
that will come down to our understanding of what the
machine is capable of, like what kind of constraints it has,

(12:24):
what type of what level of autonomy it seems to
be operating at. I mean, again, Weirdly, even when people
set out to define clear rules for what makes a
machine culpable, there there's still going to be a lot
of subjectivity in it. I'm looking at like legal definitions
of what constitutes a robot versus just a machine, and
some of these definitions involve things like, well, a robot

(12:47):
feels like a social agent. So there's still, like, you know,
an element of subjectivity. But I think that's correct in
how we actually apply the term most of the time, right,
Like something is like a gut feeling about how this
machine is behaving in your world. Is it acting more
like a fixed, you know, brainless machine or is it
acting a little bit more like a person. So while

(13:12):
it would be one thing if it were basically a cigarette vending machine that was selling to children, but if it were a machine that went door to door and rang the doorbell and then asked for the children so it could sell them cigarettes, that would be
a different matter. I mean, yeah, I mean, I think
that would require different types of remedies probably, Yeah, I
mean I think a lot of people would probably look

(13:32):
at the cigarette vending machine and say, where was the
vending machine placed? Why was it in a place that
children could have access to it? Rather than attacking the
fundamentals of the machine itself. If it's going door to
door and giving cigarettes to kids, yeah, then people are
probably going to attack the fundamentals and the moral character
of the robot. Right, you have now attacked the robot itself.

(13:53):
It would just be robot justice in somebody's front yard. Yeah. Alright, so I guess I want to introduce one
of the papers we're gonna be looking at in this
pair of episodes, and it is by Mark A. Lemley
and Brian Casey called Remedies for Robots, published in the

(14:14):
University of Chicago Law Review in twenty nineteen. And this
is a big paper. It's like eighty something pages long
with a with a lot of different interesting, uh thoughts
in it. We're not going to be able to cover
the entire thing in depth, but it's worth looking up.
You can easily find a full PDF of it if
you want to read it in depth. And we're gonna

(14:35):
look at some of the larger framework it lays out,
and then some interesting thoughts raised by it. But to kick it off, here the authors write, quote: What happens when artificially intelligent robots misbehave? The question is
not just hypothetical. As robotics and artificial intelligence systems increasingly
integrate into our society, they will do bad things. We

(14:58):
seek to explore what remedies the law can and should
provide once a robot has caused harm. Now, obviously we're
going to be focused less on the like minute particulars
of US legal precedent here and more on the broader
issues they raise about robot agency, robot moral decision making
and how that interacts with harm and morality and justice.

(15:20):
And the authors start out in their introduction by giving
what I think is a really fantastic example of how
an autonomous robot with behaviors guided by machine learning, which
is how you know increasingly most robots are going to
be controlled, can end up doing things that are the
exact opposite of what was intended. So this case that

(15:41):
they cite is based on a true story from a
presentation at the Eleventh Annual Stanford E-Commerce Best Practices Conference in June, and it goes like this, quote: Engineers training an artificially intelligent self-flying drone were perplexed. They
were trying to get the drone to stay within a

(16:01):
predefined circle and head toward its center. Things were going
well for a while. The drone received positive reinforcement for
its successful flights, and it was improving its ability to
navigate toward the middle quickly and accurately. Then suddenly things changed.
When the drone neared the edge of the circle, it
would inexplicably turn away from the center, leaving the circle.

(16:25):
What went wrong. After a long time spent puzzling over
the problem, the designers realized that whenever the drone left
the circle during tests they had turned it off. Someone
would then pick it up and carry it back into
the circle to start again. From this pattern, the drones
algorithm had learned correctly that when it was sufficiently far

(16:46):
from the center, the optimal way to get back to the middle was to simply leave it altogether. As far as the drone was concerned, it had discovered a wormhole. Somehow, flying outside of the circle could be relied upon to magically teleport it closer to the center, and far from violating the rules instilled in it by its engineers, the drone had actually followed them to a T.

(17:10):
In doing so, however, it had discovered an unforeseen shortcut, one that subverted its designers' true intent. That's really good.
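[A minimal sketch of how this kind of reward loophole can arise, using a hypothetical one-dimensional toy version of the task; the reward scheme, reset position, and numbers below are invented for illustration and are not taken from the actual experiment:]

```python
# Hypothetical toy illustration of the drone "wormhole" failure mode described above.
# Assumptions (invented): the drone gets +10 for reaching the center, pays -1 per
# time step, and whenever it leaves the circle a human carries it back to a
# start position well inside the circle.
RADIUS = 10.0
START = 2.0          # where a human places the drone after it leaves the circle
GOAL_REWARD = 10.0
STEP_COST = -1.0

def episode_return(policy, start=9.0, max_steps=30):
    pos, total = start, 0.0
    for _ in range(max_steps):
        pos += policy(pos)                 # move one unit per step
        total += STEP_COST
        if abs(pos) > RADIUS:              # left the circle: picked up and reset
            pos = START
        if abs(pos) < 0.5:                 # reached the center: success
            return total + GOAL_REWARD
    return total

fly_inward = lambda pos: -1.0 if pos > 0 else 1.0                 # intended behavior
exit_and_reset = lambda pos: 1.0 if pos > START + 0.5 else -1.0   # exploit the reset

print("fly inward from the edge:   ", episode_return(fly_inward))
print("leave the circle on purpose:", episode_return(exit_and_reset))
# The exploit policy reaches the center in fewer steps because the human reset
# "teleports" it inward, so a reward-maximizing learner will prefer leaving.
```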
That's, yes, I love it. This is
such a great example of how robots can fail in
ways that are perfectly logical for the machines themselves, but
hard for humans to predict in advance, because we're not

(17:33):
understanding how our, you know, our programming or the data sets we're training it on are biasing its behavior in ways that are strange to us. And in this case,
of course, such a malfunction is harmless. But as autonomous
machines become more and more integrated into the broader culture,
not just in controlled contained locations like factory floors and laboratories,

(17:56):
but in the wild, so on the streets and in
our homes and stuff, there will inevitably be cases where
robots fail like this and fail in ways that cause
catastrophic harm to people. Yeah, and plus, as an aside,
we we have to realize that even in cases where
the machines have not failed, there will be gray areas

(18:17):
in which it's not completely clear, and an argument could
be made in these cases for machine culpability with a
variety of intents and possible biases in place. Oh yeah,
that's another thing these authors talk about that there can
be all kinds of ways that that robotics and AI
could end up causing extreme harm to people without ever
doing anything that if a human did it would be illegal.

(18:40):
One example they give is like if, um, if
Google were to suddenly change its Google Maps algorithm so
that it routed all of the city's traffic through your neighborhood. Like,
nothing illegal about that. It doesn't like commit a crime
against you, but this is going to drastically negatively impact your quality of life. And it's a decision that's

(19:02):
just, like, a quirk of an algorithm
in a machine. Now, this paper in particular concerns the
legal concept of remedies. So I was reading about remedies.
A common legal definition that I found is quote the
means to achieve justice in any matter in which legal
rights are involved. Or in the words of Lemley and Casey,

(19:24):
what do I get when I win? Right? So, if
you if you take somebody to court because you say
they have harmed you. Whatever outcome you're seeking from that
that court case is the remedy. So usually when a
court case finds that somebody has done something wrong to
harm somebody else, the court responds to the finding of
guilt or blame by enforcing this remedy. And common remedies

(19:46):
would include a payment of money, right a guilty defendant
has to pay money to the plaintiff, a punishment of
the offender like maybe they go to jail, or a
court order to do something or not to do something.
For example, somebody is ordered not to drive a vehicle,
or they are ordered not to go within a hundred
feet of somebody else or something like that. Yeah, or

(20:07):
their their eyeball is removed, or they have to spend
a night in a haunted house something like that. Hopefully
not in modern law. But wait a minute, there are
some sometimes you do read about some really strange like
remedies that are ordered by judges like I order you
to, I don't know, wear a chicken suit
or something right like, Yeah, there's some judges who like
to get creative. It seems weird. Yeah, I wonder if

(20:30):
there have ever been cases where someone has to
spend a night in a haunted house due to a
court order. I think that would be a good setup
for a film. But anyway, so when you start looking
at the idea of remedies, remedies are complicated because they
involve different types of implied satisfaction on behalf of the
victim or plaintiff. And some are very clear and material,
and others are much more abstract. So the

(20:52):
ones that are very clear and material are like, if
I hit your car with my car and I'm clearly
at fault, I need to give you a payment of
cash to offset the material losses to the value of
your car, right. But then other times it's it's more abstract.
It's you know, punishment of an offender to give the
victim a sense of justice or to allegedly discourage someone

(21:15):
from committing this type of harm or offense in the future.
And then the authors right that things get way more
complicated when you bring robots and AI into the picture.
For example, if you're trying to give a court order
to a person you know saying like you shall not
drive a car, you shall not you know, come within
a hundred feet of this person. You can do so
in natural language, you can like speak a sentence to

(21:37):
them and you can expect them to understand. But how
do you get a court to give an order to
a robot not to do something? Most robots don't have
natural language processing, and even if they do, a lot
of times it's not that good. So you might think, okay,
well you just you know, you give the court order
to the robot's programmer, and then they'll

(22:00):
have to program the robot to obey. But this is
also really complicated, like whose responsibility is it, the robot's
current owner or the original contractor or creator who made
the robot? Uh? And what if this is like an
end user consumer device that the owner doesn't have any
ability to reprogram, or what if, in the case of

(22:21):
robots whose behavior is driven by machine learning or some
other kind of system that is for practical purposes, a
black box, what if it's not even clear how you
could reprogram it to reliably obey the rule. Yeah, because
there's a chance you got to this position because the
robot misinterpreted what was asked of it. So if you

(22:45):
then make additional requirements, ones that maybe, you know, haven't actually been tested before, but are just, you know, then brought on by the court, that could conceivably create new problems, right? Yeah, yeah, totally, and it keeps getting even more complicated from there. Like, Lemley and Casey write, quote: To complicate matters further, some systems,
including many self driving cars, distribute responsibility for their robots

(23:10):
between both designers and downstream operators. For systems of this kind,
it has already proven extremely difficult to allocate responsibility when
accidents inevitably occur. It just seems like a real, real
fast way to get into skynet territory, where it's like
the robot then decides that the only way to assure
that it never sells cigarettes to children again is to

(23:33):
destroy all humans. That sounds like finding a wormhole to me.
We will be getting into some more wormhole territory as
we go on, so more complications. Uh. The authors bring
up the idea of how do courts compel a person
or a company to obey a court order. Right Like,
if you know a company is like dumping poison that's
harming somebody, and the person sues that company, what does

(23:55):
the court do to get them to stop? Well, there is a threat of contempt of court if they
don't stop doing it. Right, Courts usually just assume that
people are motivated by a desire not to pay huge
monetary damages or a desire not to go to jail.
Would that have any motivating power on a robot. It
would only have that power to the extent that the

(24:16):
robot had been programmed to take that into account. If
it hadn't, it wouldn't matter at all. Like, you know,
most robots probably do not have any opinion one way
or another about going to jail or having
to pay damages, So you'd have to explicitly program it
to be disincentivized by potential punishments. Yeah, because take the
cigarette robot for example. Like, its prime directive

(24:39):
is just to sell delicious cigarettes to human beings. Like, what else? What kind of leverage do
you have? Right exactly, So in that case, you'd be
faced with either you'd be trying to find some kind
of human who's responsible for its behavior, but you could
very well run into the problem that like you can't
really identify any one person who seems to be at

(25:00):
fault for what it did, and it's doing this bad thing,
so so what are you going to do about it?
And then of course things get even weirder when you
start getting into that that other side. You know, that's
like the more like direct and material remedies that can
be provided by courts, either like a monetary award to
the victim or in order to stop doing something that

(25:22):
causes harm. On the other hand, you've got this thing
that courts often end up engaging in, and people are largely driven and motivated by, however irrational it might be in some cases, and that's the perceived abstract
value of punishment, you know, not just material damages to
a victim or in order not to do something, but
the inflicting of punishments specifically to demonstrate the court's displeasure

(25:45):
with the original behavior of the defendant. Uh, so they raise a question that's brought up in a paper by a professor named Christina Mulligan, who explores the subject of should you have the right to punch a robot that hurts you? Lemley and Casey called this the expressive component of remedies, and though a desire to see offenders punished may be an extremely natural and

(26:09):
nearly universal human drive, it's debatable whether it actually serves
a purpose in reducing harm, and if it does, in
what cases it does. I I love this idea because,
on a very literal level, it makes me think, well,
why would you punch a robot. They're made out of
out of metal. You're gonna hurt your hand. All you're
gonna do is hurt your hand, and you're not going

(26:29):
to hurt the robot unless first of all, you design
the robot so that it has at least one punchable
portion of its anatomy, and then for it to be
more than just, you know, a cathartic thing for you, then you have to also make sure there's some sort of feedback, right, where, punch, yeah, like

(26:51):
you punch cigarette bot in its punchable area, then it will say ow, and maybe it will, I don't know, auto-incinerate one packet of cigarettes so that it can never sell them, that sort of thing. But then, yeah,
you're having to design your robots to suffer to a certain extent, which I guess means that goes back to what C-3PO said, right, he said,

(27:12):
you know, about being made to suffer. It seems
to be our lot in life. Oh, that's interesting. I
hadn't thought about that. Yeah. Clearly R2-D2 and C-3PO have inherent desires to
avoid pain. They have been programmed with that. Yeah, but
as we've said that, that's not standard issue for robots.
Most robots don't care about whether or not they get injured, Like,
that's not a motivating factor for them. And again it

(27:35):
raises this bizarre question of like, what are you doing
when you punch the robot? Like, what is the I
guess it's making you feel better, But does it make
you feel better if like you know that the robot
doesn't actually care? Yeah, and then what needs to be done to convince you that it does? Okay. Yeah,
it just gets very sticky, very quickly. And then of

(27:55):
course turns the mirror back on the way we handle human-to-human scenarios. Right. But anyway, Lemley and Casey, I guess to summarize their position, they say, okay, increasingly independent robots and AI are coming. They're infiltrating
more and more into society, and they will inevitably do
bad things. When that happens, the legal system will try

(28:18):
to order remedies to make things right, you know, when harm has been caused. Our current legal understanding
of remedies is based on the assumption of human agents
and human agents only, and its rules are not suited
to dealing with robot crime or robot offenses. Quote: As
we have shown, failing to recognize those differences could result

(28:39):
in significant unintended consequences, inadvertently encouraging the wrong behaviors or
even rendering our most important remedial mechanisms functionally irrelevant. Uh So,
to take robot agents into account, we're going to have
to examine and rethink how our systems of remedies work.
But, and this is a point we've been making already,

(29:00):
this could have multiple benefits because it could also lead
to a better understanding of how we apply these remedies
to cases dealing exclusively with humans. Quote: Indeed, one of the most pressing challenges raised by the technology is its tendency to reveal the trade-offs between social, economic, and
legal values that many of us today make without deeply

(29:21):
appreciating the downstream consequences. They write, we need a law
of remedies for robots, but in the final analysis, remedies
for robots may also end up being remedies for all
of us. Now, like I said, this is a very
long paper. We can't do justice to all of the
subjects they raise, but to focus on some highlights, I
thought one interesting place to look was when they try

(29:43):
to get into the definition of what actually makes a
robot in the legal sense. Obviously, there's going to be
some difficulty here because think about how differently the term
is used and how many different things it's applied to
in the world. Uh, the authors here cite a professor, Ryan Calo, who in the past had written that there are three important characteristics that define a robot and make it different from just any machine, like a computer or phone. And Calo says that these three qualities are embodiment, emergence, and social valence. So to
or phone. And Callo says that these three uh, these
three qualities are embodiment, emergence, and social valence. So to
quote from Calo, robotics combines, arguably for the first time,
the promiscuity of information with the embodied capacity to do

(30:28):
physical harm. Robots display increasingly emergent behavior, permitting the technology
to accomplish both useful and unfortunate tasks in unexpected ways.
I like that idea of unfortunate tasks. Um and robots,
more so than any technology in history, feel to us
like social actors, a tendency so strong that soldiers sometimes

(30:50):
jeopardize themselves to preserve the lives of military robots in
the field, and lives is in quotes there. Yeah, you
may remember this from the film it came out a
few years back, Saving Private Cigarette Robot. It's quite touching.
I mean, it seems absurd, but it does seem to
play on our natural biases. I want to talk about

(31:11):
a couple of examples from a psychology paper in a second,
but like, uh, we're we're just so ready to look
at machines like humans and and treat them as such.
It seems almost impossible to avoid. But anyway to pick
up with Limly and Casey after after that callo quote
they say quote. In light of these qualities, Calo argues
that robots are best thought of as artificial objects or

(31:34):
systems that sense, process, and act upon the world to
at least some degree. Thus, a robot in the strongest,
fullest sense of the term, exists in the world as
a corporeal object with the capacity to exert itself physically. Though,
It's interesting to me that even this attempt to give
a strict and legally useful definition of a robot includes

(31:58):
a subjective component. I brought this up earlier, the component
about human feelings, the social valence criterion that Calo cites here.
This means they feel to us like social actors. Yeah,
Like I was wondering in all of this, like, where
does a particularly malicious robo call fit into the scenario? Say,

(32:18):
a robo call that is not just about trying to
sell you something, but it's like, you know, actively trying
to say, get a credit card number out of you
for nefarious purposes. Yeah, that's a really good point. And
and along those lines, Lemley and Casey argue that actually, they don't think the embodiment criterion of hardware is necessarily
a good one. That maybe our concept of a robot

(32:41):
should be less limited to the essentialist quality of being
embodied and more just apply to anything that exhibits intelligent behavior.
And exactly, things like that robocall would be a good example. Uh, the things we think of
as robots probably do. They They're not just like stand
alone objects. They interact with the broader world in some way,

(33:02):
but they could be entirely software based. Yeah, I guess certainly.
The roomba is a great example, or any kind of
like vacuuming robot where it's it's it's it's in your house,
or it's in a room in your house. It's it's
interacting in your environment and it's essentially making decisions about
how best to move around that space. Sure, but if

(33:22):
you want to take it out of the embodied space,
you could have the idea of bots on the Internet.
There's things out there acting autonomously to some extent and doing,
you know, executing some behavior, acting almost maliciously. But we
we are tempted to call them bots meaning short for robots,
because they have some kind of apparent independent agency and

(33:43):
they're doing something that seems at least halfway intelligent. Right, Yeah,
And you can easily imagine how they could be, and, well, I mean, they are used maliciously in some cases, but how something like a social media bot
that responds to certain comments in a particular way, Like,
it's very easy to imagine how you how that could
be utilized in a way that would be like not

(34:04):
only annoying, but just outright harmful, even physically harmful.
Oh yeah, I mean think about some of these, uh say,
like bots on social media that try to crowdsource information,
like during a natural disaster or something like that. You
could imagine uh, intentionally maliciously manipulating a bot of this
kind to like have you know, bad information on it

(34:26):
or something. Yeah, yeah, or you know, anything that a
troll can do on social media, a bot could conceivably
do as well. So that just you know, opens up
the door, right. But coming back to this, so there's
this interesting idea that robots feel to us like social actors,
and that seems to be, at least by some people's definitions,

(34:47):
a kind of inextricable quality of what makes a robot
like it feels like, at least to a small extent,
like a person somehow. And it reminds me of the
psychology paper I was looking at just recently on human
social interaction with robots that is by Elizabeth Broadbent called
Interactions with Robots The Truths We Reveal about Ourselves, published

(35:09):
in the Annual Review of Psychology in twenty seventeen. Uh, this
was a highly cited paper and it seems to be
it's a big literature review of a lot of different
stuff about about how humans interact emotionally and socially with robots,
And the one section I was thinking about was where
she reviews a bunch of other studies about how we
mindlessly apply social rules to robots. So there are a

(35:33):
ton of different examples, but just to cite a couple
of them, one she writes up his quote. After using
a computer, people evaluate its performance more highly if the
same computer delivers the rating scale then if another computer
delivers the rating scale, or if they rate it with
pen and paper. So like, if you know, you get

(35:53):
a thing at the end of a test that says like, hey,
you know, how did you enjoy interacting with this machine,
You're more likely to give it a higher score if
you're still sitting on the same machine. Or at least
that was what was found by nas at all in
nine UH and the and Broadbent rights quote. This result
is similar to experiment or bias, in which people try

(36:15):
not to offend a human researcher. Another example of social
behavior is reciprocity. We help others who help us. People
helped a computer with a task for more time and more accurately if the computer first helped them with a task than if it did not, and this was found by Fogg and Nass. I love that idea of people,

(36:37):
you know, being more reluctant to rate a computer, uh
poorly if they're still interacting with the same computer. That
that's that seems perfectly true to me. But another interesting
one from the summary is, quote: Research in psychology has
shown that the presence of an observer can increase people's honesty,
but incentives for cheating can reduce honesty, and this is

(36:59):
found by Covey et al. in nineteen eighty-nine. In
a robot version of this work, participants given incentives to
cheat were shown to be less honest when alone compared
to when they were accompanied by either a human or
by a simple robot, and that was found by Hoffman
et al. This illustrates that the social presence of

(37:20):
robots may make people feel as though they're being watched
and increase their honesty in an effect similar to that
produced by the presence of humans. Now this is inter right.
This This also reminds me of various studies that have
gone into sort of the idea of of imagine beings
or religious beings watching us while we're doing things right,

(37:42):
or even just, like, eye imagery, like putting some eyes on a wall looking at people while they're, like,
I don't know, not supposed to steal from the collection
plate or something like that. I don't know if it's
to the same extent, but at least in the same
direction that the presence of another human is. You know,
your You might be a little bit worried that are
who D two is gonna, you know, judge your moral

(38:02):
character harshly or tattle on you. I'm not as worried
about our two, but um three po snitch than Coming
back to Limele and Casey, so they talked for a
long time about how robots get their intelligence. They talked

(38:22):
about the importance of machine learning for the modern generations
of robots and AI, that it's just not practical to
hard code AI the way we used to imagine. You know,
you'd be a programmer and you're just like creating a
lot of strings of if then statements like you know,
the kind of intelligence that we expect from a modern
AI or or intelligent robot is too complex for people

(38:44):
to program in a in a direct way like that. Instead,
they've got to be trained on natural data sets through
machine learning, but of course doing so comes at the
cost of increasing uncertainty about their future behaviors. Behaviors could
emerge that a conchi and just programmer would never intentionally
hard code into the system. Uh So, so that brings

(39:06):
us to like, what types of harms could we expect
from robots and AI? And the authors here come up
with what I think are some very useful categories, some
sort of like cubby holes to slot the different types
of AI fears into. So the first kind is what
they call unavoidable harms. These are probably not the main
ones to be worried about, but they are worth thinking about.

(39:26):
Uh And this is just the fact that some dangers
are inherent to many products and services; we just accept
them as the cost of having those products and services
in the first place. So like this would just be
cigarette bot, just by virtue of selling cigarettes, is doing harm to people, right? Yes, I mean, yeah, the fact
that you have cigarettes. There is some harm coming from that.

(39:47):
But there are also ones that are more fully integrated
into just the way society works, like having cars. It
is absolutely inevitable that people driving cars are going to
crash their cars and there will be fatalities from that,
and you can think of ways of reducing it, but
there's there's really not any expectation that we can have
a country that has car based transportation and there will

(40:10):
not be any accidents because there will always be things
that are that are not even reducible to driver error
or to malfunction of the cars, right, like a tree
falls on the road or something, right, birds, wild animals.
Any So, even though they're I think there's some very
convincing arguments to be made that uh a switch to
self driving cars would create a much safer uh travel environment,

(40:33):
that it would make roads safer. You're not gonna you're
not gonna get to absolute zero crashes or absolute zero
road fatalities, right, I mean you wouldn't even if the
driving algorithms were perfect, right, and they're probably not going
to be perfect. They may well be and probably are
going to be better than the average human driver. Yeah, Okay,
So that's just there's just unavoidable harm that comes from

(40:55):
using any type of product or service, and when you
integrate robotics and AI into that product or service,
those unavoidable harms will just continue. But that's something we
already deal with. The next category is deliberate least cost harms.
This is similar to unavoidable harms, but it's in cases
where the machine actually is able to make a decision

(41:16):
with with important ramifications, like it can make a decision
to act in a way that causes harm, but is
attempting to cause the least harm possible. So, in a sense,
this is forcing robots to do the trolley problem. Right,
do you switch to the track that has one person
sitting on the train tracks instead of five people? And

(41:36):
this will be another inevitable capability of autonomous cars, but
it raises all kinds of thorny questions. If an autonomous
vehicle can avoid a head on collision that will likely
kill multiple people by suddenly swerving out of the way
and hitting one pedestrian, that may indeed avoid a greater harm,
but that's probably cold comfort to the one person who

(41:58):
got hit. Right, Yeah, and then when you you have
a robot or some sort of an AI involved in
that decision making, I mean, it's it's you can just
imagine the the the intensity of the arguments and the
conversations that would ensue. Right, But then the authors raised
what I think is a very interesting point. They say

(42:19):
that this kind of life or death trolley problem will
probably be the exception rather than the rule. Instead, they say, quote, uh,
far likelier, albeit subtler, scenarios involving least-cost
harms will involve robots that make decisions with seemingly trivial
implications at an individual level, but which result in non

(42:40):
trivial impacts at scale. Self driving cars, for example, will
rarely face a stark choice between killing a child or
killing two elderly people, but thousands of times a day
they will have to choose precisely where to change lanes,
how closely to trail another vehicle, when to accelerate on
a freeway on ramp, and so forth. Each of these

(43:02):
decisions will entail some probability of injuring someone. I guess another thing to keep in mind, like with the trolley problem, generally, when you're dealing with it, there's a lot of emphasis on the problem
aspect of it, you know, like the trolley problem should
be a an ethical dilemma. It should it should hurt
a bit to try and figure out how of what

(43:25):
to do, and the idea of the trolley problem being
something that is encountered and decided upon, like as in
a split second, by a machine, um, by an algorithm
like that. That feels a bit worse to us. You know, that feels like, if it's an easy decision, even if it's just based
purely on math, you know, it's um, it feels wrong

(43:47):
on some level. Oh yeah, yeah, so I think you're right.
But also the thing they're bringing up here is that
the trolley problem you're actually more often facing is that
every single day you're your autonomous car is going to
make you know, hundreds or thousands of trolley problem calls
where on one track it is getting to your destination

(44:08):
a few seconds faster, and on the other track is
a one in a million chance of killing somebody. Yeah, yeah, and we do. We make these decisions all the time, but we don't focus on them, and I think that's part of the issue, you know. Yeah, exactly.
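[A rough, hypothetical back-of-the-envelope sketch of the kind of probability-weighted call being described here; every number and weight below is invented purely for illustration:]

```python
# Hypothetical numbers, purely for illustration: comparing two driving choices
# the way an autonomous system might, by expected harm versus time saved.
def expected_cost(p_injury, injury_cost, seconds_saved, value_per_second=1.0):
    """Expected harm minus the benefit of arriving sooner (arbitrary units)."""
    return p_injury * injury_cost - seconds_saved * value_per_second

# Option A: unprotected left turn, saves 180 seconds, tiny chance of a serious crash.
# Option B: three right turns instead, slower but safer.
option_a = expected_cost(p_injury=1e-6, injury_cost=10_000_000, seconds_saved=180)
option_b = expected_cost(p_injury=1e-8, injury_cost=10_000_000, seconds_saved=0)

print(f"left turn:   {option_a:+.2f}")   # 10.0 - 180.0 = -170.00 (net benefit)
print(f"right turns: {option_b:+.2f}")   # 0.1 - 0.0   = +0.10
# With these made-up weights, the 'riskier' left turn still wins, which is exactly
# the kind of trade-off that feels uncomfortable once it's written out explicitly.
```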
You're like, should I, okay, should I take a left on this road? Well,
there's a chance there's a speeding car just above, just
over the edge there, and I can't see it, but

(44:29):
I'm going to take that chance because I want to
cut three minutes off my drive to work. Yes. Uh,
this is actually a very good point that we we
already make these decisions, but we just don't think about
them in these explicit probability calculations, and there may be
some consequences to thinking about them this way, which is
there could be a weird like perceived downside just to

(44:49):
making these kind of these kind of calculations objective and
and explicit. Yeah, I mean I've I've run into this
with some of the map programs that I've used to drive before, where I want to tell it in some cases, like, give me the ability, and maybe they have this now, but there was one left turn in particular where I would want the ability to flag this left turn. This is a dangerous left turn. You have

(45:12):
put me in a position to make. Um, I might
know which left turn you're talking about. Is it, probably, in town? It's in town. It's near our office. So yeah, I mean, yeah, by the way, if you're out there working on programming driving apps, you should absolutely include a toggle where you can say no left turns, please. Yes, that is highly useful. I'm

(45:33):
to understand this is how one of my aunts got around. Like,
as they got older and they were less adventurous driving,
they would only take right turns, and they would do
all their driving so that no left turns were made.
I think I have one time read, this could be totally wrong, but at least one time I remember
reading a claim that, like, you know, the traffic efficiency

(45:53):
would be x percent higher and people would spend x
number of minutes less time in traffic if there were
no such thing as left turns, if everybody had to
get everywhere by only doing you know, full right turns
to to go around the block. Interesting. I'm sure there
would be some cases where you can't do that, but
you know, in a in a grid city, seems to
make a lot of sense. Maybe you get like one

(46:14):
left turn a day. It's some sort of card system.
But like I said, I I cannot confirm that. Okay,
But anyway, the next categories of harm they talk about
this one is defect driven harms. Uh, this one is
very easy to understand. The robot harms someone because of
a design flaw or a bug or a mistake, or
it's just broken. You know, a warehouse loading robot is

(46:35):
designed to only operate when no humans are nearby it.
But there's a malfunction with one of its sensors and
it fails to detect the presence of a human operator
trying to get I don't know, a piece of junk
out of it out of one of its hinges, and
it moves and kills them. Okay, this is pretty straightforward.
Just it's broken for some reason. The authors here do
point out that this gets even more complicated when there

(46:57):
is a human in the loop, e.g., an autonomous
car with a human driver who is supposed to intervene
in the event of an emergency. They talked about one
case where this happened with with I believe it was
an uber autonomous vehicle where both the machine and the
human fail, that both of them failed to stop a

(47:18):
collision that hurts someone, Like what happens here? Yeah, yeah,
of course, we we have very similar cases in just
purely human affairs. Right when questions are asked, like where
was this person's supervisor? Uh, you know, who who were
the watchers? Who? There should have been some other person,
there was someone else in the loop here, Why didn't
they do something to stop this crime from taking place? Right? Okay?

(47:39):
After that you get into misuse harms. Now, some of
these are very obvious, very straightforward, like if you program
a robot directly to go kill someone, or even if
you program it to wander around at random swinging a machete.
In these cases, it seems that the human programmer is
clearly at fault, Right, the robot has just become a
weapon of murder or of reckless endangerment, and the person

(48:02):
who told it to do that is the person responsible. Yeah.
Like, if you take an automotive, uh, like, oil-change robot and you reprogram it to, um, do appendectomies and people die as a result, like, that's a misuse. You can only blame the oil-change robot
so much because it was not ultimately designed to perform appendectomies. Right.

(48:26):
In this case, this is more like the hammer example
used at the beginning. This it's not the robot autonomously
making the decision to do this. Uh, this is somebody
just using it as a tool of crime. Yeah, But
the authors point out that there are cases where quote,
people will misuse robots in a manner that is neither
negligent nor criminal, but nevertheless threatens to harm others, and

(48:50):
these types of harm are especially difficult to predict and prevent.
So one example is just people love to trick robots,
people like to mess around with robots and AI. I
would admit to myself finding this amusing in principle. We've talked about this in, uh, you know, the Ex Machina episodes. But of course there are times when

(49:10):
it's not so funny, when when people take it to
really sinister places. One example the authors bring up here
is the horrible saga of Microsoft Tay. Do you remember this thing? Oh, this was the, this is the robot that was traveling across the country? No, no, no, no, uh, though I know what you're talking about there. No,
maybe we can come back to that. But Tay was

(49:31):
a Twitter chat bot created by Microsoft that was supposed
to learn how to interact on the Internet just by
learning from conversations it had with real users. So you
could tweet at Tay and say, hey, how are you doing?
You know, and you could talk about the weather or whatever.
But of course, who ended up engaging in training

(49:51):
this AI to speak? It was like the worst trolls
on the internet. So within a matter of hours, this
brand new chat bot had been transformed from, you know, a lump of unformed clay into a pornographic Nazi. Yes, I do remember this now. And
this kind of just gets you thinking about the ways
that people will be will be able to misuse robots

(50:13):
in ways that guide their behavior in extremely pernicious directions,
sometimes without the people guiding this misuse necessarily committing any
kind of identifiable crime. Yeah, Like people are going to
look for exploits, They're gonna look for ways, they're gonna
look for cracks in the system. It's like with any kind of, like, video game system. You know,

(50:34):
people are just gonna see what they can get away
with and and just engage in that kind of action,
sometimes just for the fun of it, right, And sometimes
that's harmless, but sometimes that's really awful. Yeah, Okay. Next
category is unforeseen harms. And here's where we start getting
into the really really interesting and really difficult cases, types

(50:54):
of harm that are not unavoidable, not a product of
defects or misuse, but are still not predicted by creators.
And so the authors talk about how, in a way,
unpredictability is what makes AI potentially useful, right like, it
can potentially arrive at solutions that humans wouldn't have predicted.

(51:16):
But sometimes it does so in ways that really miss
the boat and could be extremely harmful if they were
embodied in action in the real world. Uh, similar to
the drone example from the circle that we talked about
at the beginning. But they cite another fantastic example here that's kind of chilling. So I'm just gonna read from Lemley and Casey here. In the nineteen-nineties, a pioneering

(51:38):
multi institutional study sought to use machine learning techniques to
predict health related risks prior to hospitalization. After ingesting an
enormous quantity of data covering patients with pneumonia, the system
learned the rule: has asthma (x) implies lower risk (x).

(51:58):
The colloquial translation is patients with pneumonia who have a
history of asthma have a lower risk of dying from
pneumonia than the general population. The machine-derived rule was curious, to say the least. Far from being protective, asthma can
seriously complicate pulmonary illnesses, including pneumonia. Perplexed by this counterintuitive result,

(52:19):
the researchers dug deeper, and what they found was troubling.
They discovered that, quote, patients with a history of asthma who presented with pneumonia usually were admitted not only to the hospital but directly to the ICU, the intensive care unit. Once in the ICU, asthmatic
pneumonia patients went on to receive more aggressive care, thereby

(52:41):
raising their survival rates compared to the general population. The rule, in other words, reflected a genuine pattern in the data, but the machine had confused correlation with causation, quote, incorrectly learning that asthma lowers risk when in fact asthmatics have much higher risk. It seems like we've got another

(53:02):
wormhole here.
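[A small, hypothetical simulation of the pattern described above, assuming, purely for illustration, that asthmatic patients are routed straight to the ICU and that ICU care sharply cuts the death rate; all the probabilities are made up:]

```python
# Hypothetical simulation of the pneumonia/asthma pitfall described above.
# Assumption (invented): asthmatic patients are routed to the ICU, and ICU care
# cuts the death rate, so raw outcome data makes asthma look "protective."
import random
random.seed(0)

def simulate_patient():
    asthma = random.random() < 0.2
    icu = asthma or random.random() < 0.1      # asthmatics go straight to the ICU
    base_risk = 0.20 if asthma else 0.12       # asthma actually raises risk
    risk = base_risk * (0.3 if icu else 1.0)   # aggressive ICU care lowers it
    died = random.random() < risk
    return asthma, died

patients = [simulate_patient() for _ in range(100_000)]

def death_rate(group):
    return sum(died for _, died in group) / len(group)

asthmatics = [p for p in patients if p[0]]
others = [p for p in patients if not p[0]]
print(f"death rate with asthma:    {death_rate(asthmatics):.3f}")
print(f"death rate without asthma: {death_rate(others):.3f}")
# A model trained on this data would learn "has asthma -> lower risk,"
# because it never sees that the ICU treatment, not the asthma, drove the outcome.
```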
And here the authors introduce the idea of a curve of outcomes that they call a leptokurtic curve.
That's a strange term, but basically what that means is if you're charting what types of outcomes you expect from a traditional system, like just, you know,

(53:23):
humans looking at data, versus a complex automated system, the tails of the graph with the complex automated system will tend to be fatter, meaning you get more extreme events in the positive and negative space, rather than, you know, a sort of rounder clustering

(53:43):
of events in the, you know, normal operation space, if that makes any sense.
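[A quick, hypothetical illustration of the fat-tails idea, comparing a plain bell-curve process to a mixture that is usually tightly clustered but occasionally lands far out; the distributions are invented for illustration:]

```python
# Hypothetical illustration of "fatter tails" (leptokurtic outcomes): similar
# everyday behavior, but the automated system produces more extreme outcomes.
import random
random.seed(2)

def traditional():
    return random.gauss(0, 1)                     # ordinary bell-curve outcomes

def automated():
    # mostly tightly clustered, but occasionally wildly off: fat tails
    return random.gauss(0, 0.5) if random.random() < 0.95 else random.gauss(0, 5)

def extreme_share(draw, n=100_000, threshold=3.0):
    return sum(abs(draw()) > threshold for _ in range(n)) / n

print("extreme outcomes, traditional:", extreme_share(traditional))
print("extreme outcomes, automated:  ", extreme_share(automated))
# Most of the mixture's draws are tightly clustered, yet far more land out in
# the tails, in both the good and the bad direction.
```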
So these kinds of unforeseen harms are some of the most worrisome types of things to
expect coming out of robots and AI. But then the
other one would be systemic harms, and this is the last category of harms they talk about. The authors write, quote: People have long assumed that robots are

(54:06):
inherently neutral and objective, given that robots simply intake data
and systematically output results. But they are actually neither. Robots
are only as neutral as the data they're fed, and
only as objective as the design choices of those who
create them. When either bias or subjectivity infiltrates the system's

(54:26):
inputs or design choices, it is inevitably reflected in the
system's outputs. This is your classic garbage in, garbage out, problem, right,
They go on. Accordingly, those responsible for overseeing the deployment
of robots must anticipate the possibility that algorithmically biased applications
will cause harms of this systemic nature to third parties.

(54:48):
So UH, an example that's much discussed in this would
be an AI trained to make decisions about granting loans
by studying patterns of which loan applicants got their loans
granted in the past. And an AI like this could
end up manifesting some type of bias that hurts people,
like a racial bias in its loan assessments, because there

(55:10):
was already a bias in the real world data set
that it was trained on. So, in other words, AI
that is trained on data from the real world, unless
it is it is explicitly told not to do this,
it will tend to reproduce and perpetuate any injustices, any
inequalities that already exist. And the authors here give an

(55:30):
example that is based on algorithmically derived insurance premiums that
I think they're talking about auto insurance quote. A recent
study by Consumer Reports found that contemporary premiums depended less
on driving habits and increasingly on socioeconomic factors, including an
individual's credit score. After analyzing two billion car insurance price

(55:54):
quotes across approximately seven hundred companies, the study found that
credit scores factored into insurance algorithms so heavily that perfect
drivers with low credit scores often paid substantially more than
terrible drivers with high scores. The study's findings raised widespread
concerns that AI systems used to generate these quotes could

(56:15):
create negative feedback loops that are hard to break. According
to one expert, quote: higher insurance prices for low-income people can translate to higher debt and plummeting credit scores, which can mean reduced job prospects, which allows debt to pile up, credit scores to sink lower, and insurance rates to increase in a vicious cycle.
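[A tiny, hypothetical simulation of the vicious cycle the expert describes, with a made-up premium formula and made-up numbers, purely to show how the feedback loop ratchets:]

```python
# Hypothetical feedback loop: premiums keyed to a credit score push the score
# down, which raises the premium the following year. All numbers are invented.
def simulate(credit_score=550, years=5):
    for year in range(1, years + 1):
        premium = 2500 - 2 * credit_score        # worse score -> higher premium (made-up formula)
        debt_added = max(0, premium - 1200)      # whatever the budget can't absorb becomes debt
        credit_score -= debt_added // 20         # mounting debt drags the score down
        print(f"year {year}: score={credit_score:4d}, premium=${premium}")

simulate()
# With these invented numbers the score ratchets downward each year, while a
# driver who starts with a high score never enters the loop at all.
```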

(56:37):
So this is kind of a nightmare scenario, right? Like an AI that
is too powerful and not explicitly protected against acquiring these
types of biases could create these kinds of computer-enforced
prisons in reality, like a machine code for perpetuating whatever
state of the world, like whatever state the world was
in when the AI was first deployed, and then entrenching

(57:01):
it further and further. Yeah, and that kind of thing
is especially scary because, like, if there's a human making
the decision, you can you can call up the human
to a witness stand or ask them like, hey, why
did you make the decision this way? But if it's
an AI doing it, you could say, like, hey, why are we getting this outcome that's,
you know, creating a sort of like cyclical prison out

(57:22):
of reality? And they can just say, hey, you know, it's the machine. The machine, you know, it knows what it's doing. Yeah, and then, yes, the machine
says I learned it from watching you dad, and you
have that moment of shame. So I think these different
categories that they bring up are really important for helping us kind of sort our ideas into recognizable types for ways that AI and robots could

(57:44):
go wrong and could potentially cause harm that you would
seek legal remedy for. And also they help identify the
spaces where there's the most worry. I mean, for me,
I think that would be like those last two cases, right,
the unforeseen problems and the systemic problems are the ones
where there's the most real danger I think and the
most difficulty in trying to figure out how to solve it. Yeah,

(58:06):
because we kind of, you know, train ourselves to a certain extent and sort of culturally focus on the Skynet problems, right, the really obvious, um, situations
where the robot car veers off the road in a
dangerous way. But the situations where it is just perpetuating

(58:28):
what we're already doing, where it's making choices in getting
from point A to point B that don't violate anything
we told it, but just are an uninventive and even
harmful way of doing it. Uh, yeah, that's harder to deal with. That's a type of misbehavior that you can't solve by just having Dan O'Herlihy stand

(58:49):
up and bellowing, behave yourselves. Exactly. Um, yeah, yeah, yeah,
I mean I can't remember if that even worked. I
just remember that was one of my favorite moments, and
that was RoboCop 2, right? Was it? Yeah, well, I mean in RoboCop 1, I think we're already dealing with this problem of, like, the sort of weird dynamics of machine culpability, when ED-209, like, shoots that

(59:12):
guy five hundred times in the boardroom during the demonstration,
and then Dan O'Herlihy's response to it is to turn to Ronny Cox and say, I'm very disappointed, Dick.
But anyway, well, I guess we're running kind of long, so maybe we should call part one there. But we will resume this discussion about robot justice and robot

(59:34):
punishment in the next episode. That's right, we'll be back
with more of this discussion in the meantime. If you
would like to check out past episodes of Stuff to
Blow Your Mind Uh, and they're definitely worth checking out
because we have lots of past episodes that deal with
robots and AI. We have lots of episodes where we
make RoboCop references, so they're all they're all there, go

(59:54):
back and check them out. You can find our podcast
wherever you get your podcasts. Just look for the Stuff
to Blow Your Mind podcast feed, and, uh, in that feed we put out core episodes of the show on Tuesdays and Thursdays. Mondays, we have a little listener mail. Wednesdays, that's when we do the Artifact, a shorty, usually, and then on Fridays we do Weird House Cinema. That's
our chance to sort of set most of the science

(01:00:17):
aside and just focus on the films about rampaging robots.
Huge thanks, as always, to our excellent audio producer
Seth Nicholas Johnson. If you would like to get in
touch with us with feedback on this episode or any
other to suggest a topic for the future, or just
to say hello. You can email us at contact at
stuff to Blow your Mind dot com. Stuff to Blow

(01:00:43):
Your Mind is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you're listening to your favorite shows.
