Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to
Forward Thinking. Hey there, and welcome to Forward Thinking, the
podcast that looks at the future and says, Robot Parade,
Robot Parade, wave the flags that the robots made!
(00:21):
I'm Jonathan Strickland and I'm Lauren Vogelbaum, and today our third host,
Joe McCormick, is not with us, but he will be
back soon. And in the meanwhile, if you couldn't tell,
we're gonna be talking about robots again. So it's a shame that Joe can't be here, because he actually pitched the other show we'll be recording, and
that will come out later this week that also has
(00:44):
to do with robots. But we're going to do our best to uphold the high standards of robot integrity that Joe embodies, because as we all know, Joe is a robot. He is mostly, most of him is clamps, uh, but they're delightful clamps, uh, and really, integrity is
(01:05):
what this episode is going to be about. Yeah, it's kind of... alright. So, a little backstory here. Back at the end of May, uh, there was a report that came out that spurned, uh, sparked, I should say, not spurned, sparked, a lot of conversation about robots and rights. And
so we're gonna talk about what that was and what
(01:27):
was said in that report. But before we get to that,
we should also mention that this episode has a lot of tie-ins with tons of other episodes we have previously recorded, because, as it turns out, we love us some robots. We talk about robots a lot here on the show about the future of technology and science. Strangely enough. Yeah, no, okay, so we've
(01:48):
got What Happens When a Robot Breaks the Law from March of... was that so recently? It was a million years ago, right? Yeah? And then there's, but then there's also Will Robots Steal Our Jobs? from December, and Will Robots Take Our Jobs? from May. Something tells me we didn't really pay attention when we recorded the second one. One of those may have been a video. It
(02:10):
might have been. It could be that one is a video, or it could very well be that we recorded two episodes with the same title. I mean, to be fair,
this is an ongoing conversation that is getting more and
more complicated. We also have one on universal basic income,
which ties into those concepts as well, absolutely, because you
know when robots steal our jobs, that is when universal
(02:32):
basic income is going to be, how do I get some money? Right. We also have AI Friend or Foe
from April two thousand fourteen, What if the President were
a robot? That was also a Joe suggestion; that was also April... and more. Yes, if we listed every
(02:52):
single episode, I mean really, just pull up our RSS
feed and search robot. I think I got thirty-four hits. To be fair, that also includes all the metadata, so robot shows up a lot, but thirty-four hits, and we have not been recording this show that long. I mean. So why are we going to talk about robots again? Because
(03:13):
there was a report for the European Parliament and that's
the one that I was talking about that came out
at the end of May and had proposed some provisions
to grant personhood to robots, electronic personhood. Yeah, among other things. That electronic personhood is probably the element that I think has had the most conversation around it after
this report came out. Yeah, because it's kind of the
(03:34):
flashiest one, and it's sort of the most far-reaching part of the report. But a lot of the report is actually very grounded and down to earth, and I love it, like, I love a lot of the ideas in here. So yeah, we were really excited about talking, uh, with ourselves and to you guys. I was about to say with you guys, but I guess you can't. I mean, you can talk back, we're just not ignoring you. We just
(03:57):
literally cannot hear you. Wait, was... oh no, that's... no, he's eating something crunchy. At any rate. So we wanted
to really kind of boil down what was in this
report because there's a lot more to it than just
the electronic personhood. Like you were saying, Lauren, that gets
a lot of attention because it's almost like saying,
(04:17):
would you give your vacuum cleaner rights? We've had these discussions here on this show. The cool thing is, at the end of those discussions, we always say, like, you know,
we really need to talk about this. Yeah, it would
be wonderful if elected officials would begin considering this sort
of thing. And it's happening. It's crazy, alright. So the
(04:37):
report came out in twenty sixteen. We should also say it's a draft report, or at least that's how it's titled. Yeah, it's a proposal. Yeah, it's called Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics. They have the capacity to make even very exciting things sound very dull. I mean, wow. Yeah, there's gotta be a rule somewhere
(04:58):
that if you have proposed something, or you've created a report, that the title must be one of two things. It must either be excessively long, right, and by being excessively long, you have already
told somebody everything they need to know about what's in
the report. Or it has to incorporate a stupid acronym
(05:23):
that has been tortured so that it can encompass the
various concepts that are in the report. And if you're lucky,
you get both. Yeah. So the thing about this report,
it had some awesome stuff in it. So, first of all,
the general purpose of the proposal is to start those
official discussions into developing policies and guidelines in the field
(05:45):
of robotics, particularly since individual member states in the European
Union are developing their own policies over time. So the
fear is that you're gonna get all these different countries
making their individual rules, and then you're gonna have conflicts between the way one country legislates robotics and the
way another one does. And especially in the European Union,
(06:06):
where everyone is working together very closely and literally working
in countries other than the one of their origin, it gets really important, really quickly, to have agreement in all of... Exactly. Yeah, if you got your
training and your experience in one country in the European Union,
and you are working in the robotics field in some capacity,
and then you take an opportunity in a different country
(06:27):
in the European Union, which is perfectly legitimate, you might
find yourself struggling because these different member states have different rules.
This is a proposal to make more of an umbrella
of rules so that it's kind of an even playing field throughout the entire European Union. Now, to tell you
how cool this report is in many ways, the introduction
(06:49):
of the proposal, at the very beginning, section A under introduction, they reference Mary Shelley's Frankenstein, the Pygmalion myth. They also, and I didn't put this in the notes, they also mention the Prague golem. They mention Karel Čapek's R.U.R., or Rossum's Universal Robots, which is where the term robot originates. Yes, yes,
(07:12):
that's where it was coined. Yeah. And it's already ten
times more awesome than any government proposal I've ever read,
because you've got all these things in the introduction talking
about throughout our history, we have created fiction that talks about our desire to create a synthetic form
of life, or at least intelligence. In their case, they
(07:35):
were talking about intelligent machines, although I would argue Frankenstein's
monster is not really a machine. It's an artificial... Yeah, I guess so, I mean, on the same level that any human is a machine. Um. Now, section B, and each of these sections, by the way, is like a paragraph under the introduction, but section B of the introduction posits that we're on the threshold of a new industrial
(07:56):
revolution that is powered by artificial intelligence, and that no part of society will remain unaffected by this revolution, and so it's wise to consider the implications now. Again, which is
what we say basically at the end of every single
podcast that we do about robots, right, and particularly ones
about the idea of, you know, will automation, uh, ultimately
(08:17):
take more jobs than it creates? How are we going
to deal with that? That's really what that paragraph is
getting at, saying, look, we can't even fully anticipate the
consequences of this. We know what's happening. We know that
robotics and artificial intelligence, machine learning, automation, all of these
things are going to lead to a crazy amount of
(08:40):
change on par with the industrial revolution of the past.
We need to start talking about that in an official capacity,
not just like thought experiments, but actually, like, let's lay
down some guidelines so that we don't mess things up
too much. Yeah, and and by by way of kind
of providing proof of that, it points out that robots
(09:01):
sales have been increasing. Exactly. Yeah, they're saying robot sales have been on the rise, especially in, uh, areas like car manufacturing; automation in the automotive industry has really been on the rise, as well as just in electronics in general. And it also states that robots
will provide numerous benefits in the short to medium term,
(09:22):
with a potential for quote virtually unbounded prosperity end quote
in the long term. That's a lovely phrase. Yeah, it's
kind of like the Star Trek belief where you know,
you've got like everything on demand all the time because
of this wondrous world. It's probably not going to go to quite that extent because, as we have talked about in the past,
(09:42):
to get to that Star Trek future, we'd also have
to find like some sort of crazy energy sources, and
we don't have dilithium crystals yet. So, not yet. Keep digging, everyone. Yep. So, the flip side, however, because this isn't all just sunshine and roses here. Robotic roses. Now, they wanted to say, like, hey, we
(10:03):
got to consider some of the possible bad things about this.
They say that these advances quote may result in a
large part of the work now done by humans being
taken over by robots end quote, and that that would
affect not just employment, but also systems like social security,
which rely on employment taxes for funding. So, which is
a thing that I had somehow never thought about before.
(10:24):
But it is, yeah, clear, I mean, once it's pointed out, it's like, of course Social Security is a thing that we need to think about in this crazy future of robots takin' our jerbs. Exactly. If you are not
having enough money paid into Social Security because so many
jobs have been taken over by automated systems, then social
(10:44):
Security itself can no longer perform the function it's meant to do. And you start to see a pretty precipitous decline in the quality of life for thousands, millions, of people. Right, well, based on the current system. I mean, obviously, if we had something like universal basic income, then that would help. But see, then you're all, how
are you funding that? Exactly, Like if you don't, if
(11:07):
you don't have people working, they can't make money and
you can't tax them. I mean, like, they're not... Yeah, you could, but they'd just have even less. I mean, like, I have these fingernail clippings, what do you want? I've got some, like, old newspapers in the garage I can
give you. I mean I was saving them for some reason.
(11:28):
But yeah, it's just... and so again, this proposal is saying, look, we're acknowledging that these issues are right on the cusp of happening today. We need to start seriously talking about it. Right, and furthermore, on this kind of social issue side: you know, what happens when a robot breaks a thing, including a person? We need
(11:50):
we need some legal recourse for figuring that out. Exactly, and we need to be able to assign liability in those issues. We need to also understand, how do robots and humans interact in a way that is a positive experience for humans? Uh, sometimes that means creating
an experience that appears to be a positive experience for robots,
(12:10):
because even though at this stage, I would argue robots
can't appreciate a person being polite to a robot for example,
right the robot don't care? Yeah, it doesn't. It lacks
the capacity. We have not programmed it to love as
of yet, or to feel slighted. But sometimes, by creating
a system that encourages people to behave in a way
(12:32):
towards robots the way they would toward another human being, in a positive way, that actually has a benefit on the person. Right. Like, we've seen some interesting
studies about when you create responses in robots so that
they seem to understand when they get something right or
(12:53):
get something wrong, and they seem to have a reaction, people feel more receptive towards those robots and work
more easily with those robots. And so again, while it
may not have a direct benefit to the robot in
any you know, psychological way, it has a benefit on
us as we interact with those robots, which is it's
(13:13):
one of those things that you don't necessarily think about
when you start talking about policies. Uh. Also, they say, hey, what happens, um, if robots, like, get super smart, like, more-than-human smart smart, not just human smart smart, and they're also autonomous so that they can act on that smart smart.
(13:34):
They don't necessarily need to be conscious or self aware,
just really really intelligent. And so essentially they're saying, we
need to think about the possibility of robots becoming a
danger to the human species. I don't... let's not create a bot that's gonna kill us. Let's not Skynet this thing,
(13:55):
let's not do that. Yeah, let's do the opposite of that. Well, maybe not the opposite, but a different option at least.
So considering all of those things, the introduction concludes, the
European Union should really get off its stuff and start
talking about these ideas and work on a strategy to
avoid problems in the future. So that's just the introduction. Yeah,
(14:16):
it's just laying everything out there for you. And then
what they do later on in the report is kind of lay down general principles and some guidelines that they would suggest. But those guidelines are very general. You're
going to hear the same thing throughout the rest of
this podcast where they say we need to form a
committee specifically to set this up. Yeah, that this entire
(14:38):
proposal is like one of those meetings where you're just sitting there and planning more meetings for the future. Right, right, and then eventually, later on down the road, you hold a meeting to discuss how you can have fewer meetings, and that's when everyone loses their mind and runs screaming from the room. But right now, we're at a stage where they are saying, like, hey, we
don't have all the answers. This document is not
(14:59):
meant to be... it's not an answer. This is just a suggestion for how we might come up with the answers. Exactly. This isn't that guide you would find in Beetlejuice about how to, uh, you know, how to work with the recently diseased, I'm sorry, deceased. That's a joke from the movie. Anyway. So the next section is
titled General Principles, or at the beginning of it, it says
(15:22):
General Principles, and the report cites Asimov's Laws of Robotics, because of course they do. See reference number, like, eighteen. Again, it's pretty awesome. Yeah, yeah. So if you are
rusty on Asimov's Laws of Robotics, they are: first, a robot may not harm a human, or,
(15:42):
through inaction, allow a human to come to harm. Second, a robot must obey the orders given by a human, unless doing so would violate the first law. Third, a robot must protect itself from harm, unless doing so would conflict with either of the first two laws. And a fourth, and/or zeroth, depending on how you count, because it was added a little bit later: a robot may
(16:04):
not harm humanity, or, by inaction, allow humanity to come to harm. Now, these are not innate laws, right? These are just good ideas. These were good ideas
that a science fiction author came up with for the
purposes of creating interesting narratives that would show how even
these very basic ideas could sometimes have consequences. Yeah, that
(16:28):
you didn't anticipate, but we generally since then have said
those laws of robotics seem like a really good idea, so let's kind of lay it out just how we want it. Yeah, if you've got a big old metal thing that weighs like a half ton and it can move around and whack stuff, let's try and make sure it doesn't whack people. Yeah, preferably so.
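Just to picture how the precedence among those laws works, here's a toy sketch in code. This is purely illustrative; it's not from the report or from Asimov, and the scenario fields and names are all invented for the example:

```python
# Toy illustration of the precedence among Asimov's laws (zeroth through third).
# Purely hypothetical: no real robot has an evaluator like this, and the
# scenario fields below are invented for the example.

def permitted(action: dict) -> bool:
    """Check a proposed action against each law in priority order."""
    laws = [
        ("zeroth", lambda a: not a.get("harms_humanity", False)),
        ("first",  lambda a: not a.get("harms_human", False)),
        ("second", lambda a: a.get("ordered_by_human", True)),
        ("third",  lambda a: not a.get("destroys_self", False)),
    ]
    for name, check in laws:
        if not check(action):
            print(f"blocked by the {name} law")
            return False
    return True

print(permitted({"ordered_by_human": True}))                       # True
print(permitted({"harms_human": True, "ordered_by_human": True}))  # False
```

Uh, then we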
(16:49):
get the liability section. Talking about whacking people: this is it, this is what this is about, right? Like, how do we determine who, or what, is liable in these various scenarios? And this relates very closely to that
podcast we did in the past about what happens if
a robot breaks the law. So we get to a
(17:11):
point where this is where the report first suggests that
the European Union might need to classify robots under a
new category. Either a category like persons, like, essentially, say, can we treat robots like people in the sense of liability, like, legally speaking? Legally speaking, or would it be more like animals, or would it be more like objects? Or do we need to create an entirely new classification for an
(17:34):
intelligent non-human entity? Right, right, something that can behave at least semi-autonomously, and something smarter than a dolphin, less bound to the sea. Yes. So, first thing they
point out is that AI and machine learning are transforming
how robots interact with environments. And we wouldn't necessarily call
(17:56):
even the most advanced AI that we have operating today anything close to being conscious or self-aware, but that almost doesn't really matter, right? It could still act in a way that could potentially be harmful.
And we're seeing more applications that allow machines to learn
from their environments and adapt their approaches to complete certain tasks.
(18:16):
That complicates things, the whole autonomy part that makes robots useful. Yeah,
and the idea that you know, since we've been seeing
machine learning like go from observing a pendulum and determining
what the laws of physics must be based upon those
movements up to the you know, learning what a cat
is based upon just feeding it information, you know, and
(18:37):
and then getting to a point where we can maybe
create an artificial intelligence, set it in a room, give
it a task, and have it learn from its mistakes
so that it eventually begins to do that task more
and more efficiently. That's that's creating something very different from
just this machine will follow a specific set of instructions
and repeat it exactly every time until it breaks. Right,
(19:00):
And because right now, the way that robots work, take a Roomba, for example, a nice, easy thing that you may or may not have interacted with. Uh, right now, if a Roomba trips you and you die, um, you sue the company. You sue iRobot? Yes, I think so, yeah, I believe it is iRobot. So that would be, that's
(19:22):
exactly the case. Because robots right now, for the most part, are essentially machines that do a very specific task, and they do it pretty much the same way every time, even with something like the Roomba, which has some object avoidance and some other little bit of learning for mapping purposes. But it's very basic, right? It's not like the Roomba is suddenly going to vacuum in a totally different way. Right, it can't. It
(19:45):
lacks that capability. But at this point we would say,
all right, well, if the machine were to cause harm through its, uh, its normal operations, you would essentially say the manufacturer, the producer of that, maybe the programmer, is liable, right? But that gets more complicated as you get into these devices that have machine learning
and autonomy. Yeah. Yeah, because if it's learning how to
do new stuff, if it's interacting within an environment that the programmer didn't account for and therefore learns something harmful,
then do you blame the manufacturer? The report actually says that the more it learns, the less you can really blame the creator, right, that the behaviors
(20:32):
it's learning are due to whatever environment and variables it's encountering.
And it may very well be that if you were
to trace that behavior all the way back down to
the gem, the germ of an idea, the nugget of
programming that eventually evolved into that behavior, it would be
very difficult to say, well, this is clearly the fault
(20:54):
of the manufacturer. Which means that maybe the robot itself has to be liable, which raises questions like, well, how do you deal with that? You've got to, like... bad robot! I mean, the robot can't think or feel.
You can't punish the... I mean, if you punish the robot, it would be meaningless, right? Like, the robot can't feel
(21:15):
badly about it. It can't learn from that experience. It can't be like, wow, they put me in solitary and now I'm gonna really turn things around. Yeah, I mean, you're not gonna get, like, a robot Shawshank Redemption thing going. I mean, it's really shaky whether or not humans are made penitent by being put in a penitentiary here.
That is an episode I want to do. Yes, an
(21:38):
upsetting episode. Probably a lot of, not shouting at each other, but shouting just into the void. Yeah, a lot of that. But at any rate, getting back to the robots. So the report says, we need to perhaps come up with some, uh, definitions so that we can figure out ways to assign liability in those cases where machines
(22:01):
are having this level of artificial intelligence and autonomy. Yeah,
and it furthermore proposes to create a system of registration for robots that are so advanced that this could become a problem. Yeah. This is almost like saying, you know, the way you might have to register a weapon. The idea is that, you know, in a robot, a sufficient level of intelligence also carries with it a certain amount
(22:24):
of risk towards humans, and that intelligence may not necessarily
come in the form of physical violence. It could come
in the form of sharing your information in a way that
you had not anticipated, or perhaps uh spying in the
sense of like a video surveillance type of issue. So
there the draft says, maybe we need to have a
(22:45):
registration system, and once a robot reaches a certain threshold, whether it's a threshold in artificial intelligence or just the various features in the robot, at that point the producer must register the robot so that there is a government document connected with that specific entity. In the case that
(23:07):
that entity goes robo-berserk, or makes some mistake, or operates in a way that was unintended but causes harm, in any of those scenarios you've gotta have a way to assign that liability. So this would, again, fall into that category, right?
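Just to picture what one of those registry entries might hold, here's a minimal sketch; the fields and the threshold are invented for illustration, since the report doesn't specify any format:

```python
# Hypothetical registry entry for an advanced robot. The report only proposes
# that such a register exist; every field and the cutoff here are invented.
from dataclasses import dataclass

AUTONOMY_THRESHOLD = 0.5  # made-up cutoff above which registration is required

@dataclass
class RobotRegistration:
    serial_number: str
    producer: str          # who built it, for tracing liability back
    owner: str             # who operates it day to day
    autonomy_level: float  # 0.0 = fixed program, 1.0 = fully self-directed
    insurance_policy: str  # the obligatory insurance the report floats

def requires_registration(autonomy_level: float) -> bool:
    return autonomy_level >= AUTONOMY_THRESHOLD

entry = RobotRegistration("RB-001", "Acme Robotics", "J. Doe", 0.7, "POL-42")
print(requires_registration(entry.autonomy_level))  # True: above the cutoff
```

And then, in order to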
kind of assist with that, it calls for more funding
(23:29):
for research projects, particularly the ones that are looking into these social and ethical challenges that are raised by these kinds of advancements. Right, because the report recognized, they said, hey, we realize this is new. It's going to require experts, and
it's going to require funding. It's going to require money.
So we need to actually seriously talk about this, not
(23:52):
just say, like, hey, let's throw ideas around, but let's create specific entities, and those entities have the responsibility of really seriously tackling these issues and coming up with guidelines that are, uh, that are aligned with those ideas.
Um and that the European Union should create quote legislative
(24:13):
instruments on legal questions related to the development of robotics
and AI end quote that look ahead ten to fifteen years. Yeah,
hard to do. That's nuts. We love to speculate about the future. Obviously, that's what this podcast is all about.
That's literally what we do here. Yes, um, but I mean,
(24:34):
but we also make so many jokes about forty years
and also, wow, gosh, I keep forgetting how awesome that
little jingle is. Also, what we say doesn't become law, right? And thank goodness for that, Jonathan, no joke. I mean
(24:56):
when we get that axe of destiny swinging around, you want that stuff to... Sometimes we're whimsical, and by
whimsical I also mean capricious to a level that's probably irresponsible. Anyway,
the last time I took a test, I came out
chaotic neutral. Yeah, that's terrifying. Chaotic neutral is the scariest
(25:16):
of the alignments, isn't it? I mean, chaotic evil, at least you know that whatever they're gonna do, it's gonna be bad. Chaotic neutral, it's just like, I'm gonna close my eyes and throw a dart. Will I hit a board? Maybe? Um. So. The interesting thing here, though,
again is that they want to look at creating again
a specific official entity that is dedicated to answering legal
(25:42):
questions and speculating about what questions will need to be
answered more than a decade out. So saying, let's not
wait until we've reached a technical, uh, technological level of sophistication where we then have to backtrack and figure out a way to litigate around issues. Yeah. So, let's proactively get
out there and be looking for the most cutting edge,
(26:04):
the most ridiculous even um possibilities that could come up
in the future, and let's plan just a little bit
for those. Yeah. And it's so different from the way
we've seen the law and technology interact in the past, Right, Typically,
we see technological advancement leap way ahead of the law, and
(26:27):
then we have this incredible... Yeah, and it tends to be messy and kind of embarrassing, if we're being real honest. Yeah,
I mean, just look at the Internet and telecommunications here
in the States. I mean, that's been a huge pain
in the butt and that has led to big arguments
about net neutrality. And so what this draft proposal is
(26:47):
saying is that they want to avoid having as many
of those scenarios as possible by proactively thinking ahead. It's just, I don't know how they're gonna do it. It may very well mean that
they create policies that are covering a specific pathway, and
as we see technology take a different pathway, they'll just
have to switch gears. Uh. It's a challenging thing.
(27:09):
I'm sure it would be a very frustrating thing in
many ways, but it's also really important. Um So, the
draft proposal also calls for a legal solution that doesn't
restrict the type or extent of damages a person
can seek based solely on the fact that the damage
that was caused came from a non human agent. In
other words, you should never have a situation where a
(27:31):
court tells you, well, we can only award you x
number of dollars or euro in this case because your
mugger was a robot, So there shouldn't be a specific
limit just because of the robot being the cause. Yeah,
I mean also like, don't create a mugging robot, y'all.
But yeah, I mean, let's that's going down the futurama path, which,
(27:53):
while entertaining, is really not practical. No hobobots, no orphan bots,
come on. Just but but but joking aside, I mean,
we're already seeing a need for this, uh in cases
for example, like the recent and tragic death that resulted from a Tesla car crash just a couple of weeks ago. Um, the driver had his Tesla car
(28:14):
in autopilot mode, which is not in fact autopilot but
rather a driver assist kind of function. Yeah, I have
multiple times over the last week yelled that we maybe
need to change the name of that feature, that it
is inherently misleading. But but right, I mean, you know,
(28:34):
in this case, where the driver was in the car,
the car was in driver assist mode. Uh, neither the
driver nor the computer in the car recognized that a
semitruck that was crossing his path was there because of
just glare conditions, or maybe the dude was looking away.
I don't know. Yeah, yeah, we don't have... I don't think all the details will ever be available, because
(28:57):
we just don't know what was happening at that moment.
But as you're pointing out, like, this is an issue where there's the question of who ultimately is liable in this case, and without having the legislation there to kind of create that, it means that the
courts have to start figuring it out on their own,
(29:18):
and that's a rough way to make legislation. Um. So,
moving on with the liability section. They also suggest that
the producers of a robot are liable for damage on
a level proportionate with the amount of instructions the producers
gave the robot. This is the part that I find
really fascinating. Yeah, so this would be like, the
(29:39):
more simple your robot, the more liable the producers are
for any damage the robot creates. Yeah, if you get
crushed by a dumb robot, it's the manufacturer's fault, right, because they're the ones who, like, if the robot can only follow instructions that were created by the manufacturer, and through the operation the robot has hurt someone, then the
(30:00):
manufacturer would be considered liable, assuming that you're not having
a situation where a person has flagrantly ignored warnings or
safety features, like in a big automation thing, there's usually
rails and all this other stuff separating the robot from people.
Do not walk into the clamp function of the robot. Exactly. Kind of, in those cases, you could
(30:21):
argue that, well, the person who was hurt was at fault. But they're talking about other situations, where
there was an unintended consequence in the normal operation of
the robot. However, the more the robot acts as an
autonomous entity, the lower the responsibility of the producers of
the robot. So if a robot damages property or hurts someone through an autonomous decision that was not
(30:44):
directly traceable to its fundamental programming, in other words, this
was some sort of learned behavior that went awry, the
robot ends up being liable not the producers of the robot,
which is a pretty radical idea. They also say that
the longer a robot has received quote unquote education, the more liable the robot's quote unquote teacher is for any damage
(31:06):
the robot causes. So, in other words, if you have
the robot in a lab setting and you're training it
in the lab, and you're training it over and over again in lots of different ways, and then you implement the robot somewhere and it hurts somebody, you could say, well, the teacher was the one training the behaviors. Yeah, the manufacturer didn't do this. This person trained
(31:27):
this robot to do this. So yeah. So if you get crushed by a dumb robot, it's the manufacturer's fault. If you get crushed by a smart robot, it's either the robot's fault or your own. And so what do you do in that case? Like, all right, well, now we know whose fault it is. What then?
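To make that sliding scale concrete, here's a toy apportionment model; the report states the principle in words only, so this formula and these numbers are entirely invented:

```python
# Toy model of the report's sliding-scale idea: producer liability shrinks as
# autonomy grows, and part of the learned share shifts to whoever "taught" the
# robot. The formula and weights are invented; the report gives no numbers.

def liability_shares(autonomy: float, training_fraction: float) -> dict:
    """autonomy: 0.0 (fixed program) to 1.0 (fully self-directed).
    training_fraction: portion of learned behavior that came from a teacher."""
    producer = 1.0 - autonomy              # dumb robot: producer holds the bag
    learned = autonomy                     # the learned-behavior share
    teacher = learned * training_fraction  # more education, more on the teacher
    robot = learned - teacher              # remainder falls on the robot itself
    return {"producer": round(producer, 2),
            "teacher": round(teacher, 2),
            "robot": round(robot, 2)}

print(liability_shares(autonomy=0.1, training_fraction=0.0))  # mostly producer
print(liability_shares(autonomy=0.8, training_fraction=0.5))  # teacher/robot split
```

Well,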
the report has some suggestions for that. They said, what
about an obligatory insurance scheme, similar to what you
(31:49):
would have to have if you wanted to operate a car, except in this case, the people making the robots are the ones paying the insurance. Which, I'm not sure how I feel about that. I don't know. Yeah, it's
weird because, in this case, going with the car metaphor from before, it would be like saying, well, the car manufacturer, the automaker, has to pay for my car insurance, an idea which I love. Now, if it
(32:13):
were an autonomous car, you might say, well, that makes
sense because I'm not the one controlling the vehicle. If
it's a car that's under my control, however, you could argue, well, it's the driver's behavior, unless you're
talking about a faulty vehicle, in which case there could
be a recall or something like that. So they're saying no,
the producers would pay out the insurance for the robots
they create, and if the robot were to cause any damage,
(32:36):
then that insurance would end up paying for that damage.
Or there could be a compensation fund for the robots themselves,
uh yeah, to pay them for all of their hard work,
so they'll feel greater job satisfaction, right and be able
to buy the good oil, right? Exactly. Finally, they won't be sludging through, like, the used fry grease over at the fast food restaurant. They'll finally be
(32:59):
able to go in and order a pint of WD-40 or something. No, no, this is
not to make the robots feel better about the job
they are doing. And actually it's an interesting and practical
solution to this issue, saying, let's have a compensation fund
for each individual robot, not so that the robot has
spending money, but so that if the robot were to
(33:20):
cause harm, then there's actually money that belongs to the
robot that could be paid out to the person who's injured, or whoever owns the property that was damaged, in the case of that sort of thing. Uh, and the robot itself is paying for it, although
(33:42):
ultimately it's whoever's compensating the robot. But this is a way of saying, like, well, you wouldn't go after the business necessarily if the robot itself had the funds in its compensation fund, and that you would even possibly invest money on behalf of the robot, which is... so the robot would have its own four-oh-one-
(34:02):
k or something. It's weird, right? Yeah, yeah, and
and that's the point in the proposal at which it says, again, like, really though, we're going to probably need a different legal distinction for robots at this level, um, than just... yeah. And they also say that
the compensation fund is just, it's a possible solution. They're not even saying, like, this has got to be the way. They first start with the idea of
be the way. They first start with the idea of
the insurance, then they move on to possible compensation fund,
but they admit like this is just the sort of
brainstorming ideas we have and something that would need to
go into further discussion with an actual committee. Um. And
then we move on to the next section, which is
(34:44):
all about ethical principles, and it calls for the design
of an ethical framework to make certain that advances in
robotics are made with consideration of the impact on human safety, privacy, integrity, dignity, autonomy, and data ownership. So
which is a really good spread. I'm proud of them for putting privacy and data ownership and stuff
(35:05):
like that in. I am too, really impressed. I was consistently impressed by this report, seeing the scope that they were taking, because it was so far beyond just simple ideas like, they're takin' our jerbs, or robots
are big and scary and can smash my face in
or things like, you know, the stuff that we typically
talk about. They went well beyond that. I mean, they
(35:27):
covered those items too, but that was kind of cool.
They go on to say that the risk of harm
to a person should be no greater than that encountered
in ordinary life. So, in other words, in the future that's filled with robots in all sorts of roles and jobs, social robots, care robots, whatever they may be,
a person should encounter no more risk than they do
(35:51):
right now in an ordinary day. So, robots should not make the world riskier, in other words. Um,
which is really what we're hoping for. Yes, I mean, to decrease the risk would be awesome. But, I mean, at least don't make it worse. So I
(36:11):
love this. I love this idea. Like, look, we know how humans are. Can we establish a baseline of suck? Exactly. Like, can we make it not suck more? If we open up any
history book we see our propensity for making things way
worse early on in the development of any given technology.
(36:32):
We think it'd be awesome if we didn't do that
this time. Uh So to do all of this, you know,
they admit like this is a lot to talk about.
They say that there must be a new European agency
specifically dedicated to the field of robotics and artificial intelligence,
and that it has to be funded properly. It can't
(36:53):
just be an agency in name, and it needs to
be staffed with technical experts as well as leaders in
legal and regulatory fields. So they're saying, like,
we need smart people. Yeah, we need people who understand
the technology, who understand the hypotheticals about the technology, and
also people who understand the law and understand how not
(37:14):
to be dicks to people exactly. Yeah, I mean this
is this is not a small thing. This is you know, Yes,
you need engineers, you need the people who know how
to solve a problem, but you also need the ethical
experts who can say, look, just because this is the
most efficient way to solve the problem doesn't mean it's
the right way, right, especially when you have a world
that also happens to have human beings in it, which
(37:36):
which, fingers crossed, again, like, yeah, you know, I'd like the future to have, I suppose, yeah, at least a couple. Right, I mean, you know, I need a couple
of folks to watch my videos and listen to my podcast.
Otherwise what am I going to do? Right? And then
it goes into a bunch of specific sections about different, about more committees and about more
(37:58):
government and more things to think about. So,
we'll cover these in brief. So one of them is
about intellectual property, which they want to find a way
to both protect and encourage innovation. So it's kind of like,
you know, the idea of patents, the idea that they
protect an idea, but they also allow people to see
what the idea is, and then once the patent expires,
they can also use that as the jumping off point
(38:21):
for new inventions that kind of stuff. Right, and not
only in terms of making robots, but in terms of
what robots make. This is, really, again, something I wouldn't have necessarily thought a government report would include in it.
But yeah, they say, what about works that robots make?
Like let's say a robot writes a song. For example.
We've talked about this in the past about robots and
(38:42):
their role in art and what is art? And can
a robot make art? Yeah, and so there need to be laws in place for what happens when that happens, because it is happening right now. Yeah. Who owns the copyright to the music a robot creates? Is it the manufacturer?
Is it the person who owns the robot? Uh? Is
it whomever gave... does the robot have ownership of that?
(39:04):
Does that art, and the money it makes, go back into that fund? Like, exactly. Right. Like, wow, our Paul McCartney bot went insane and slaughtered fourteen people. Thank goodness it wrote
that hit song five years back. Because we can afford
to pay off all the legal fees. We laugh or
we cry, don't don't do that. It was the first
(39:28):
name that popped in my head. Uh. And to be fair,
I mean, come on, he's he's part robot too. So uh,
if if this in fact happens, how do we assign
that ownership? And again they don't have the answer for it.
They're just saying, this is a question that we need
to answer. So we need to bring it up and
say This might seem frivolous, but it's absolutely something that
(39:50):
is happening, and so we have to come up with
our legal answer for it. Yeah. This next one is
one that I'm, I'm so excited about: standardization. And I feel like such a boring adult, being excited about standardization. But we've covered, like, home automation, right? Home automation.
One of the problems is that if you don't want
(40:11):
all your stuff to come from the same company, how
do you make sure that stuff can communicate with each
other so you actually get that automated experience that you wanted,
right or or autonomous vehicles. I think this is one
of the biggest hurdles that we're facing right now to
autonomous vehicles is how do we create a system where
all of those vehicles from different manufacturers can talk to
(40:32):
each other, talk to the road, talk to the traffic signals,
et cetera. Yeah, where you can have that fully integrated system where cars, they know where everyone is, they know where all the other cars are. Not
only do they know where the other cars are, they
know what the other cars are going to do before
it starts to happen because they can communicate with one another. Well,
(40:53):
without those standards in place, you end up with a
lot of proprietary systems where all the vehicles made by
one company can talk to each other, but they give the cold shoulder to those other cars. Yeah, and they represent a tiny percentage of all the vehicles on
the road. You don't really get a, you don't have an integrated system in that approach. And so they're saying, well,
we don't want to see that with this future of robotics,
(41:16):
So let's start creating some standards that companies can follow
when they're designing their products so that we avoid this
in the future and we have a more seamless integration. Um.
And in fact, they have another section on autonomous vehicles themselves,
and they specifically call out for standards for those autonomous vehicles.
They also mention care robots and medical robots. Um, they're
(41:39):
specifically, in those cases, talking about a need to develop robots where you're specifically thinking about, how are these going to affect humans? Right, right. And just kind of, I think it was a call for,
like research and development into, uh, how we can create
robots that can help with with care of the sick,
(42:00):
or the elderly, or, you know, whatever group it is. And how can we create robots that are providing care, not just physical services, and make sure it still maintains a sense of human dignity as well,
like not just that they are effective in doing what
they do, but that they perform their duties in such a way that the person who's being cared
(42:22):
for doesn't feel lesser for that. Yeah, and again,
being someone who currently and thankfully is healthy and able bodied,
it is... it hadn't occurred to me. I'm, I'm so privileged in the place where I am in my life, it didn't even occur to me. And I am so thankful to see that there are smart people
(42:43):
talking about this. Um. They also get into human repair
and enhancement. Enhancement, you say, yeah, so not just like, Okay,
we've developed this technology that is an artificial heart or an artificial kidney or an artificial liver or whatever. This is getting into when we develop technology that could actually create, yeah, an upgrade to a human being without
(43:06):
prior injury. Right, like, not like, this arm is defective, I need a new one, but like, no, this arm is less cool than the robot arm. My wrist can't turn three hundred and sixty degrees, so it's really a pain in the butt to put a light bulb back into a socket. Yeah, I just want to have one where I can spin the wrist that way and I'm done. Uh. So
(43:26):
they're saying, well, hey, this is gonna happen. Maybe we
need to start thinking about the ethics involved in this
and actually get experts in hospitals and other healthcare institutions
who are thinking about these kind of issues and start
to work on legislation for that as well, so that
we don't enter into an environment where people are
(43:49):
just willy-nilly getting crazy upgrades. Maybe they're putting their
own lives in danger as a result, maybe they're creating
a bigger divide between the haves and have nots. They
don't want to do that, So that's what that section
is specifically about. Oh and they also have a little
section about drones and that bit, but I'm not going
to cover it because it's essentially the kind of stuff
(44:09):
you would imagine, like common sense things like privacy and security,
that sort of thing. So then they get into the discussion
about the impact on jobs and social systems. This is
probably the section that got the most attention in the media after the idea of personhood, and it's kind of
also like the most immediate impact or issue. Yeah, because
(44:31):
we don't have robots at the level of artificial intelligence
and autonomy in the general sphere that we have to
worry about that right now. I mean, thinking about now
is good, but it's not like they're already out there
right The most advanced ones are in pretty secluded environments
(44:52):
for very specific research projects. It's not like you're gonna
encounter one as you walk down the street unless there's
been a terrible mishap. Paul McCartney bot? I was thinking either that or it's Johnny Five from Short Circuit. So,
according to one forecast that the report cited, the EU could face a shortage of as many as eight hundred
(45:13):
twenty-five thousand information and communications technology professionals, or ICT professionals. And on top of that, it predicts that a large percentage of all jobs will require some level of digital skills. So the proposal calls for a revision of
a digital competence framework so that they can at least
(45:34):
help young people, and really people of all ages, develop
those basic skills so that they maintain a viable place
in the workforce. Um. They also call for designing programs
to encourage more young women into the fields of robotics
and the related fields, related technical fields, and they say
(45:55):
that the European Union and the member states within it
should launch initiatives in order to support women in ICT and to boost their e-skills, which is
pretty awesome. Essentially, this is a cultural shift, saying, guys,
let's stop sending a message that these are fields primarily
for dudes, right, because they weren't originally, and it's
(46:17):
ridiculous that they are today. And there are people who are absolutely instrumental in the development of computer science who happened to be women. Uh, I mean, like, you know, you've got Ada Lovelace, who created the first freaking computer programs, and you've got the woman who, uh, who coined the term computer bug. Like, uh,
(46:44):
their contributions to computer science and to information and communications technologies have been phenomenal. But in general, we've created a culture, and by we, I mean, like, everywhere, not the two of us in this... no, we have tried not to perpetuate it, yes, but there's been a culture that has discouraged women from going into those fields
(47:06):
to the point where men who are in the field may, like, they may feel like, what are you even here for? Like, when they see a woman come into, like, an engineering class. And that
absolutely needs to change for multiple reasons. And uh so
they're saying, let's get on that, let's create programs and
(47:27):
change this perception that is based on fallacy so that
we get brilliance that we're otherwise missing out on.
So again it goes well beyond just the idea of
robots in that case, also very inspiring. So they also
call for a system to monitor job trends to see
(47:48):
where jobs are disappearing due to robots and automation, as
well as where are jobs being created because of robotics.
So essentially saying, well, we want to make sure to
steer people further away from these jobs that are increasingly
being taken by robots, but we definitely need more people
in these other areas where the opportunities have
(48:09):
opened up, because that helps shape education and training. Yeah, and
this kind of goes in line with what we were talking about when we visited Georgia
Tech and chatted about, you know, the idea of will
robots take our jobs? And the answer is no, there
are gonna be other jobs. But there's the practical consideration of, well,
how do we, one, identify what those jobs are, and, two, make sure that people are getting
(48:31):
the education and experience necessary to do those jobs.
It's easy to say, oh, well, now we've got all
these more awesome jobs available, It's a lot harder to
practically get people in the right place. Yeah, So this
is proactively thinking about that. And here's a really big
one that is the gem of the draft as
(48:52):
far as most people are concerned. Perhaps the European Union
should quote introduce corporate reporting requirements on the extent and
proportion of the contribution of robotics and AI to the
economic results of a company for the purposes of taxation
and social security contributions end quote. So the idea that hey,
(49:13):
if you're a company and you have replaced your human
employees with robots, uh, maybe you have to pay a
certain amount. Yeah, maybe those robots are employees and you
have to pay social security taxes for each of your
robot employees, which sounds crazy because you're thinking like, well,
(49:33):
robots are never going to collect on social security. But
you get back to that funding question. Yeah, exactly.
if you get to a point where the Social Security
is being defunded due to the robots taking over more
and more jobs, you still have the need for social security,
but you don't have the money for social security. This
is a kind of a temporary solution because obviously this
(49:53):
can't be supportable long term. As more and more things
get automated, there becomes less of a need for money
in the first place. But until that happens... Right, right.
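As a back-of-the-envelope picture of how such a contribution might be computed, here's a toy sketch; the rate, the fields, and the whole formula are invented, the report prescribes none of this:

```python
# Toy illustration of the floated idea: a company reports the share of its
# economic results attributable to robots/AI and pays contributions on it.
# The rate and formula are invented; the report sets no actual numbers.

SOCIAL_SECURITY_RATE = 0.12  # made-up contribution rate

def robot_contribution(company_profit: float, robot_share: float) -> float:
    """robot_share: reported fraction of results attributable to robots and AI."""
    return company_profit * robot_share * SOCIAL_SECURITY_RATE

# A firm reporting 2M in profit, 40% of it attributed to automation:
print(robot_contribution(2_000_000, 0.40))  # 96000.0 paid into the social fund
```

Yeah,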
there's an event horizon past which none of this is important anymore. That's that Star Trek future, and then we're a minute away from it. Um, and it does
actually call out specifically like hey, maybe really we should
(50:15):
all be thinking about a universal basic income. Yeah, they say that every member state in the European Union should really seriously consider that. It may not be the right, or it may not be the same, approach for
every single nation in the European Union, but it very
well may be a conversation that should start to happen. Uh. Fortunately,
(50:38):
we do have nations that are debating using this, at least in test cases, or perhaps rolling it out throughout
an entire country where we can look and see what happens,
and if it ends up being a massive failure, we'll say, okay,
don't do that. But we still need a solution to
this problem, perhaps some other solution. Yeah. Yeah. And then
the last big section in the proposal is licenses. Yeah.
(50:59):
Now they don't actually have like a specific you know,
you got to go and fill out Form A three
seventeen and get your robot license. They're laying out some basic rules, guidelines they think should be included for licenses. And they have two different sets: one for designers, for robot producers, and one for users. And I'm
(51:21):
not gonna list all of them, because they're fairly extensive for both. Yeah, but some, okay. Yeah, they had a lot of really fun and interesting ideas, just suggestions of how you should go about robot-ing. Yeah.
One, step one, one of the ones under designers is: let's have a kill switch on robots. Like, you know, like, just turn it off if
(51:44):
it just flops. Yeah, so if it starts going Paul McCartney crazy, you know, the bot's going, and, say, you just, and then, like, it just switches to Wings mode or something, it turns off.
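In software terms, a kill switch is basically a flag every control loop has to respect. Here's a minimal sketch, purely illustrative and not anything the report spells out:

```python
# Minimal emergency-stop pattern: a shared flag the control loop must check on
# every cycle. Purely illustrative; the report only says robots need a kill switch.
import threading
import time

emergency_stop = threading.Event()  # flipping this halts the robot

def control_loop():
    while not emergency_stop.is_set():
        # ... read sensors, plan, actuate ...
        time.sleep(0.1)  # one control cycle
    # stop actuators cleanly; never abandon them mid-motion
    print("motors disengaged")

robot = threading.Thread(target=control_loop)
robot.start()
time.sleep(0.5)
emergency_stop.set()  # the big red button
robot.join()
```

Make sure your robot is going to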
operate in a legal way. So don't make your burglar bot or mugger bot, as we mentioned earlier. This is,
again for licenses for designers, be transparent in the way
(52:07):
the robot is programmed, as well as the predictability of
robotic behavior. In other words, maybe don't call it autopilot. Yeah, that might be a really good choice to start off with. The idea being that, you know,
if you are being sincere and honest in the way
that your robot was designed and what it is
supposed to do, you reduce the likelihood of some unintended
(52:29):
consequence further down the road. Yeah. I think of
this part almost like the FDA guidelines for how you
can market food on food packaging. Um, and just, you know, yeah, like, just be upfront and honest with people and then they'll be able to make better decisions. And, right, right, yeah, and, you know, that way you avoid a huge class action lawsuit further down the line. Everyone's happy. Um. They say that
(52:53):
the developers should design tracing tools during the development stages of robots, so that when a robot behaves in a particular way, it can be traced back to the design of the robot itself. So, in other words, it's kind of like a, uh, like a tracer bullet, in a way; you see the pathway that you can trace back. So
when it exhibits a specific behavior, you find out why
(53:17):
it behaved that way in that situation. And this could
be good or bad. It could be that you want
to find out, all right, well that was interesting. We
didn't anticipate that the robot was going to behave that way,
but it was beneficial, so let's find out what happened. Or it could be that the robot behaved in a way that would cause damage or harm to someone, and then you want to find out why it did, to get to the root of that problem. You
(53:38):
never programmed my bot to bash someone over the head.
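One plausible shape for such a tracing tool, and this is my own toy sketch rather than anything the report prescribes, is an append-only decision log the robot writes on every action:

```python
# Toy decision log: every action gets recorded with the inputs and the rule that
# produced it, so a behavior can later be traced to its origin. Invented example.
import json
import time

TRACE_FILE = "decision_trace.jsonl"  # hypothetical log location

def log_decision(rule: str, inputs: dict, action: str) -> None:
    record = {"t": time.time(), "rule": rule, "inputs": inputs, "action": action}
    with open(TRACE_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")  # append-only, one record per line

# Later, an investigator replays the log to see where a behavior came from:
log_decision("avoid_obstacle_v2", {"lidar_min_m": 0.3}, "turn_left")
with open(TRACE_FILE) as f:
    for line in f:
        record = json.loads(line)
        print(record["rule"], "->", record["action"])
```

Furthermore,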
it does also specifically say, don't make humanoid robots. Or not so much humanoid, but human-like; make it clear that they're robots. Yeah, that's the most interesting part about the little license to developers, to me, too.
(53:59):
It's like, make sure your robots are easily identifiable as robots. Don't make robots that people will think are people or animals, I guess. Or, no replicants, no, uh, rebooted Battlestar Galactica Cylons, none of that. I guess. Really
it gets down to everything, like, you wouldn't want a
robot that's designed to look exactly like a tree, for example,
(54:20):
but the tree is really a surveillance machine that's just
constantly recording audio and video. Because if you just see
a tree and you can't identify it as a robot, you
don't know that you're being, you know, watched, surveilled. Now, if you're in the UK, which is not part of the European Union anymore, but, you know, you're under surveillance all the time anyway. I'm unduly upset about the idea
(54:44):
of a robotic surveillance tree. Yeah. I mean, the first thing we think of is, don't make robots look like people. But really it's saying, don't make robots that don't look like robots. That doesn't necessarily mean, you know, like, the robot just has to be identifiable as a robot. It could be humanoid in shape, or it could just be, like, a bucket with wheels on it.
(55:04):
But you know that it's a robot, right, it doesn't
look like something else. Um, I love our future of
bucket robots. Yes, that's... actually, I'm referencing an actual robot that's hitting the streets of London right now. It
looks like a bucket with six wheels, but it has
a lid, and it's meant to deliver small packages within two or three miles. All right, I take
(55:25):
back my ribbing about bucket robots. I mean, you know, I guess you could call it... it looks kind of like a box with wheels. I said bucket because it's got curved edges, as opposed to rectangular, like, sharp ninety-degree angles. But yeah, bucket bots are a thing, man. Onto the users. So they also had several; I've only
picked three of them. One of them is, uh,
(55:48):
respect human physical and emotional frailty. So don't make your
robot do your bullying for you. Don't set your robot
to make someone feel badly about themselves or shake them
down for their milk money. You know, being respectful of
other human beings. Yeah, when you're using your robots, don't
be mean to people, right, general rule of thumb for
(56:10):
all the time, by the way, not just when you're using your robots. Just don't, people. Speaking of, you should respect other people's privacy, right, not use robots to spy on people. Right. So this is really relevant right now with drones that have high-definition cameras mounted on them, the idea of, you know, using a drone to maybe look into someone's window or something. I mean, this is specifically saying, hey, don't do that. Yeah. Well,
(56:33):
and furthermore, like, even if it's a situation that
crops up where you suddenly realize that you've got a
robot in a room and it's inappropriate for whatever reason
to have that robot doing surveillance, turn off the surveillance.
And finally, don't weaponize your robot. Just don't do it. Don't strap knives to your robot's hands and just tell it to go windmilling around. Or, you know,
(56:57):
Paul McCartney needs no excuses too, right? You just keep that robot under wraps. No, seriously, though, that's
one of the rules that they had. Keep in mind, when I say rules, these are all proposals. Even if this were to be adopted, it would still just be a set of proposals. Yeah, it's not, it's not like it's a legally binding document. Um. So that gets to
(57:19):
some of the reaction to the proposal. Some critics are
saying that setting up guidelines this early, particularly for stuff
like the concept of electronic personhood, when we don't even
have robots that are closed to being conscious, is more
premature than not, and really that that being this premature
could stifle development right in other in other words, like
(57:40):
if you were to set up rules and regulations that
say X is off limits, someone who is trying to innovate,
may end up not pursuing a path of innovation because
they're afraid it's going to overlap on that restriction. And
then what the critics are saying is that means we get a much slower rate of progression when it
(58:02):
comes to... we get no Paul McCartney bot at all. Right, and think of the songs that we're going to miss out on from Paul McCartney bot that are going to be copyrighted in some way that we haven't determined yet. They
also point out that this proposal again is not legally
binding legislation. So some people are saying, hey, what's the
point of this anyway? It's not like, if the European Parliament said this is a great idea, that anything
(58:24):
would happen. That's not the point, I would argue. I
would argue that the draft report is meant to say,
let's start the ball rolling and actually officially get some
committees in place to start forming what will be legislation. Yeah,
I get the argument, in that it's saying, like, stop making meetings to make more meetings. But in
(58:45):
this case it's necessary, especially since, like, we advocate all
the time that we need to talk about this, and
then we say, hey, why are you talking about that?
That's crazy? But uh, finally we have that social security
section that has people scratching their heads because on the
face of it, it sounds like crazy talk to say,
a robot is not a person. Why would you have
(59:05):
a robot pay into social security? A robot is never
going to collect on social security. But when you look
at that larger picture that we've mentioned a couple of
times now, the idea of a social system that starts
to fold in on itself through lack of funding, it
starts to make a little more sense at least that
you need to find a solution to that problem. Maybe
having the robots essentially the owners of the robots paying
(59:26):
into social security on behalf of the robots. Maybe that
ultimately doesn't make sense, but we definitely have to think
of a solution that does make sense, because that
problem is going to be there either way. So, uh, so, yeah, I mean, I was so pleased to get to dig into this document, and, uh, so pleased, again.
(59:47):
Like, I feel like I've said this like nine times already, but like, just good job, you guys, EU, for getting this done. Yeah. I hope that
this leads to more official action. Yeah. Uh, it would
be great to see someone take the lead in this
space and say, I don't care if you guys think
(01:00:07):
this is silly. We have to prepare or else we're
going to be caught with our digital pants down. Exactly.
I was trying to come up with what adjective do
I want to use for pants? Digital? Was perfect? Um,
so we don't want that to happen. Let's let's get
ahead of it. And I think that this is a
really interesting start, and I'm hopeful that it will continue
(01:00:29):
and that despite the jokes and the pranks and the, you know, the good-natured, like, not-so-good-natured jabs on Twitter, that we see progress, because it's something that has to happen sooner or later, and better that it happens sooner. Yeah, it will happen whether
we like it or not. So let's be prepared.
But I'm curious to hear what you guys think. You know,
(01:00:53):
give it a read if you like. It's twenty-two pages long. Uh, that includes the first two pages that have just, like, the little table of contents stuff, and then once you get into it, it's a super fast read, I promise you. You can even start skimming it in certain sections, because they're reiterating stuff they've said, so you can really breeze through it in
(01:01:13):
like a half hour easy. And that's if you're taking
time to make notes. I know because I've done it. Uh,
So check it out. It's available online. You can actually
get the PDF and read through the whole thing. And
I want to hear what you guys think about that. Like, do you have your own reaction to this? Are there certain things that you think are great ideas? Are there things that you feel are completely off base,
(01:01:35):
off track? Is there anything that you think that they
missed entirely? Yeah? And uh. Also, obviously, if you have
suggestions for future episodes of forward thinking, you can send
those to us as well. Our email address is FW
thinking at how Stuff Works dot com, or you can
drop us a line on Twitter or Facebook. On Twitter we're FW Thinking. If you go to Facebook and you
(01:01:55):
search FW Thinking in that little search bar, our profile should pop up and you can leave us a note there.
We'd like to hear from you, guys, and we will
talk to you again really soon. For more on this
topic and the future of technology, visit Forward Thinking dot com. Brought to you by Toyota. Let's go places.