
July 14, 2016 70 mins

Scott Benjamin comes on the show to talk about two stories. One is about a proposal to give robots electronic personhood status. The other, a discussion of Tesla's Autopilot feature.




Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Get in touch with technology with TechStuff, from HowStuffWorks.com. Hey there, and welcome to TechStuff. I'm your host, Jonathan Strickland, together once again, reunited, with Scott Benjamin, who is joining me today in the studio. Thanks, Scott. Reunited! Isn't there a song? "Reunited, and it feels so good"? Yeah, that's it. Yeah,

(00:26):
that's the one. Thanks for having me. I appreciate it.
And we're going to tackle a couple of different stories
that have been in the news over the past several
weeks now. At the time we're recording this, obviously
the episode will publish a little bit later, but we
wanted to talk about some stuff that has to do
with artificial intelligence, with autonomy, with vehicles, and also just

(00:47):
robots in general. So we're gonna start with probably the more sobering of the two stories: the story about Tesla Autopilot, and specifically the fatal accident that happened on May seventh, two thousand sixteen, involving Joshua Brown. What does that mean? What do we know
about the accident as of the recording of this show,

(01:09):
What does it mean for autonomous vehicles? Why does Tesla maintain that Tesla Autopilot isn't autonomous? We're gonna get into all those details, but first I just want to give a quick layout of the order of events. So the accident happened on May seventh, two thousand sixteen, in Florida. Joshua Brown was in a vehicle, a Tesla

(01:34):
Model S that had Autopilot engaged, and was traveling eastward down the highway when a semi truck was
turning left. The semi truck was traveling westward down that
same highway, turning left to cross onto another street, which
meant that the semi truck's trailer was crossing the traffic

(01:55):
in the direction that Joshua's car was going. Neither the car nor Joshua appeared to detect the trailer; they didn't see it, didn't react to it, and so there was a collision. We'll get into more details about the collision in a little bit. Mr. Brown passed away due to that; he died from

(02:17):
his injuries. Tesla actually released information to the National
Highway Traffic Safety Administration about a week and a half
after it happened, giving them the details of the accident.
Just a couple of days later, they held a big shareholder meeting; that's gonna come into play in a little bit.

(02:39):
And then it wasn't until June that Tesla posted a blog entry that acknowledged and disclosed this accident. So this is kind of a multi-tiered thing that we're gonna chat about: not just the technology, not just what kind of effect this is going

(03:01):
to have on driver assist systems and autonomous vehicles moving forward, but also the implications some people have raised, saying that perhaps Tesla was trying to hide the fact that there was this accident in light of the fact that they were about to have a shareholder meeting and offer more stock. They raised like one point four billion dollars in stock. So we're gonna tackle

(03:25):
all of that. So to start, Scott, how would you describe Tesla Autopilot? Tesla Autopilot, okay. So I guess you can't call it autonomous, right? That's the first thing we need to get out of the way. It's a legality issue. And we've talked about this, I think, on other podcasts that I've been on, for sure TechStuff, certainly on CarStuff. And again, it's

(03:48):
a legal issue, because you can't say autonomous, because that indicates that there doesn't necessarily have to be a driver in place behind the wheel. And again, it's just a word thing, a wordplay thing right now, because most major manufacturers have autonomous-like systems in the works, autopilot-like systems in the works, or they already

(04:08):
have them implemented, but they have a strict rule that
a driver has to be you know, in the driver
position at all times, ready to take control at any time.
And some of them even say that you have to have hands on the wheel, as does Tesla in this situation. With the Tesla, when you agree to enter Autopilot mode, you have to, I don't know if you click on something on that giant screen that they have, essentially

(04:30):
like a terms of service, acknowledging that in order to have Autopilot on, you are supposed to keep your hands on the wheel as well as maintain alertness to your surroundings, not just, you know, shut down and allow the car to take over. Okay. We have more to say about that as we go here too, because you'll find that there's a lot of

(04:51):
shenanigans that go along with people in Autopilot mode that shouldn't be happening because of this, you know, this terms of agreement that you sign. So Mr. Brown had agreed to that, and we will later find out that he was negligent of that. And listen, I want to get this out of the way too: we are not here to point fingers at the dead guy, by any means.

(05:14):
But see if you agree with this as we talk here. I really feel that Tesla is not necessarily in the wrong here, right? I feel it's more the fault of the driver himself than it is Tesla, by all means. Right, let me see if you think this is fair. I

(05:35):
would say that over the last decade, really a couple
of decades, we've seen more automated systems or driver assist
systems come into play. Some, you know, still rely entirely upon human input. I would argue power steering falls into that kind of category. The

(05:57):
car itself is not taking over any steering in power steering.
But that's a driver assist feature. It's helping someone operate
a vehicle without having to do ridiculous amounts of turning
a wheel in order to actually do what you need
to do. Sure, we've seen so many of these in the recent past, I guess, with lane-keep systems. You know, that's part of the autonomous system we're talking about, or, I almost said,

(06:19):
the driver assist system we're talking about from Tesla, the Autopilot. It maintains the lane, it maintains a distance between you and the vehicle in front of it, and tries to keep that at a reasonable level. I would believe that you would be able to set the distance that you want, or maybe it does it based on a calculation of your speed versus the distance it needs

(06:39):
to brake, right. Like that basic rule of thumb that you should be able to count to three from when you see the vehicle ahead of you pass something like a street lamp. Yeah, you count, you know, one one-thousand, two one-thousand, three one-thousand. There's just kind of a general rule of thumb, as you said, that at this speed, you have this amount of time in order to react, and

(07:01):
that's just the reaction time humans need. It's a little faster, of course, for an autopilot system, which can do things a lot quicker, although it has to do so in a way that's safe and isn't going to injure a person. So in other words, technically you could have a computer system follow a little closer to a car and be able to

(07:21):
react faster, but momentum is still a thing. And what about the driver behind you? Exactly, yeah, you've got the
human driver behind you, so they may not be able
to react that quickly. And you also have the issue
of when you come to a stop, you've got so
much mass, you've got so much momentum built up that

(07:42):
even if your car is capable of coming to a stop at that reduced distance between you and the person in front of you, there's still going to be a very measurable effect on the person sitting in that car. You're gonna be jerked around a lot. Yeah,
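As an aside, the three-second rule and the stopping-distance concern discussed above can be sketched with quick arithmetic. This is a back-of-envelope illustration, not anything from the episode; the 7 m/s² deceleration is an assumed typical hard-braking value for a passenger car.

```python
# Back-of-envelope look at the "count to three" following-distance rule
# versus idealized braking distance. The 7 m/s^2 deceleration is an
# assumption (typical hard braking), not a figure from the episode.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def three_second_gap_m(speed_mph: float) -> float:
    """Distance covered in 3 seconds at the given speed (the rule of thumb)."""
    return speed_mph * MPH_TO_MS * 3.0

def braking_distance_m(speed_mph: float, decel_ms2: float = 7.0) -> float:
    """Idealized distance to stop: v^2 / (2a), ignoring reaction time."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2.0 * decel_ms2)

for mph in (35, 65, 100):
    print(f"{mph} mph: 3-second gap ≈ {three_second_gap_m(mph):.0f} m, "
          f"braking ≈ {braking_distance_m(mph):.0f} m")
```

At 35 mph the three-second gap (about 47 m) dwarfs the idealized braking distance (about 17 m), but at 100 mph the braking distance (roughly 143 m) exceeds the three-second gap (roughly 134 m), which is one way to see how excess speed erodes both human and computer reaction margins.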
and this wasn't necessarily the case with this crash. You

(08:03):
know, it wasn't; this is a truck. You have to imagine what it would be like if a semi truck was crossing your path in front of you, perpendicular to the road, right? Almost like you're coming up to a crossroads and there's a semi truck going perpendicular, ninety degrees from where you're going. Yeah, the cab has passed in front of you. You're now at that part of the trailer where there's nothing below it, the trailer is above you, and the rear

(08:26):
wheels have yet to cross the road. So that's the position this was in when Mr. Brown went underneath the trailer. And so you're thinking, well, how come neither he nor the car detected this? Well, there are a couple of different things that people have said so far. One was that the sky was brightly lit that day, and that the trailer

(08:49):
itself was white, and therefore it was more difficult to differentiate between the trailer and the sky. So from just an optical standpoint, in the visual spectrum, it was hard for the computer to see, well, to understand that. I mean, you would see a truck in front of you if you had your eyes on the road. That's the other part of this, the

(09:10):
human element, I believe. And as we go through here, I think you're going to see more and more that my thoughts on this whole thing are that he wasn't watching at all. I mean, there were no eyes up on the road at all. This was completely in Autopilot mode, and he was hands-off for the whole thing. Because if he were hands-on, it is hard to imagine how he could not have seen the truck, simply because you're

(09:32):
talking about him traveling eastward. It was in the afternoon, so the sun should be behind him, not in front of him, so he shouldn't have sun in his eyes. But it's gonna be bright on the trailer; if it's a white trailer, it's gonna be bright against that bright sky. And I can see how that would be difficult for the computer to differentiate from the sky, I guess, you know, if there's no chrome trim or

(09:53):
anything, you know, that makes some type of delineation that says this is the beginning of another vehicle, and then it can detect it. It's also moving. Another thing that this all hinges on is that there's an eyewitness report that came out in early July on a site called Teslarati, and it's kind of just a compilation of Tesla news,

(10:15):
of all news, you know: SpaceX, Tesla Motors, investor news, et cetera. And one of the witnesses to this accident, in addition to the truck driver, whom we'll hear from in just a minute, said that prior to this accident, and this is crazy, the Model S was on the same road as this person, this woman driver. She was on US 27A in Florida, and the Model S, on

(10:37):
Autopilot at that time, passed her while she was driving at eighty-five miles per hour. So he's going in excess of eighty-five miles per hour; let's just say it's ballpark one hundred miles an hour. Have you ever seen what happens to a car when it crashes at a hundred miles per hour? It's devastating. I mean, you've got to keep in mind that most crash tests that are done are done at a much, much lower speed,

(11:00):
and yet they're dramatic in the outcome. Yeah, you would be shocked to see what kind of damage happens. Let's say, you know, a side-impact collision, those really dramatic ones where the whole car seems to bend in half and the car practically enters the other vehicle. That's happening somewhere around

(11:21):
thirty-five miles per hour. This is a car traveling at one hundred miles per hour, let's say more than eighty-five, to be fair, for sure. And of course, at that point of the trailer we're talking about, what happened was it sheared the roof off of the vehicle. And I think we can all understand what that probably means to the driver of that vehicle. It's horrific. So beyond that,

(11:46):
the car continued on, and it didn't continue on a long, long distance. I'm looking at a diagram that was drawn up here of the crash investigation scene, and it's not to scale or anything, but the car doesn't go a whole lot farther in its own lane. It did veer off the road: it went through a ditch, went through a fence, went

(12:07):
across the field, through another fence, and into a pole. So a ditch, two fences, and a pole, and that's where it came to rest; after that it kind of rotated to a rest. The driver of
the truck said that it all happened so fast that
by the time, I don't even know if he felt the impact, of course, but by the time he even checked his mirrors, the vehicle was already

(12:29):
beyond, you know, the impact site. He couldn't even pick it up, couldn't spot it, other than, I guess at this point, looking to his left to see it out in the field somewhere. It's moving, it's traveling that fast.
So there's this disconnect between what happened in that vehicle just seconds before the crash, the impact,

(12:50):
and what happened after. And you know, I don't know if I should say it that way. There's, I guess, just not enough information out there, like what was happening in the car beforehand. It's not a disconnect; we just need to know more information, and we don't quite yet. Yeah, the fact that, you know, you had different eyewitnesses saying things, like one saying that

(13:10):
there was a portable DVD player with a screen showing
a Harry Potter movie, and someone else said there was a portable DVD player inside the car, but it wasn't
actually running at the time. Can I tell you something? What's that? The driver of the truck, who went to check on this driver out in the field, said that as he approached, he did hear a Harry Potter movie playing,

(13:30):
but did not see it. That's the only way that he was able to discern that it was a Harry Potter movie: he could hear the audio from it. Because, I guess, if you saw the wreck, it's a mangled mess. It's a horrific car crash scene. But he was one of the first people on the scene, of course, and he did, in fact, hear a Harry Potter movie playing.
So it appears, at least from the limited information that

(13:53):
we have, that this was a case where the person behind the wheel was not following the directions stated specifically upon engaging the Autopilot: that you, you know, need to have your full attention on the road and be ready to take over. That's what it looks like. Also, we should say that, yes, the optical

(14:16):
camera system failed in that it did not recognize the fact that there was a truck crossing its path, which is pretty dramatic. But some people have said, well, aren't there backups, like beyond optical? Wasn't there radar? Well, yeah, there is radar. Here's the thing, though: the trailer is off the ground, right? It's above the ground, because

(14:36):
you've got that space between, you know, where the chassis is and the trailer part. And the problem is that the radar system can misinterpret that to be an overhead sign, because it's detecting a flat surface that's above the ground. And Tesla Autopilot essentially ignores

(14:58):
those, because if it didn't, every time you went down the highway, your car would start to brake as you approached an overhead sign, if it identified that as a potential collision. It could be a real problem. Yeah, it could mean that you're actually causing accidents, because you're hitting the brake on a highway when everything is perfectly fine. So, I mean, you

(15:20):
could argue that maybe this was at least partly a failure of technology, in the sense that the optical cameras did not pick up the truck. Or it could be that the car was just traveling so fast that, even at the speed at which computers can react, it would not have been able to react fast enough

(15:41):
at detecting the truck and being able to do anything
about it. Now you're hitting on something that I wanted to make clear here too: when we first heard about this ourselves, I had imagined that this vehicle continued on down the road, you know, with the Autopilot remaining engaged, and it was almost like a ghost car, a ghost ship

(16:04):
traveling down the road with no one at the wheel, of course. And I thought it was a much greater distance that it had traveled after
the accident. But now I'm hearing that, you know, he's traveling at such a high rate of speed, and I look at this diagram, and I know it's not to scale, but if it's anywhere close to the way it's drawn here, you know, it's not like it's five miles down the road. It's right

(16:25):
there where this car came to rest. If he's traveling that quickly, and something like the top is being sheared off the car, in the grand scheme of things, it's not like a full-on impact into a wall or anything. It's relatively easy to take the top off of the vehicle; it's mostly glass and a few beams. That's quickly handled by, you know, that type of speed. So for it to just travel that

(16:45):
far on its own momentum, its own, you know, carried-over velocity, I guess. I could see a car taking a quarter mile to come to a rest from something that fast and not quite as abrupt as a dead-on impact. Sure. And you know, I think we do

(17:05):
have to stress again: Tesla has always maintained that Autopilot
is really a collection of integrated driver assist systems that
together make it safer to operate a vehicle, and they
have data to back that up. The big, the big
point that they like to mention is the number of

(17:26):
miles driven by vehicles in autopilot mode without a fatality.
This was the first fatality of anyone driving a Tesla
vehicle in autopilot mode, and they say that the cars
had accumulated something like a hundred thirty million miles of
travel without having a fatality, which is above average. Right, Yeah,

(17:49):
because in the United States, again, according to Tesla, they cited a number saying that it's ninety-four million miles across all vehicles. That's the average: every ninety-four million miles driven, there's a fatality. Yeah, that's in the US. Globally it's different. Yes, it's sixty million

(18:11):
miles globally. I've got global numbers if you want to hear them. Yeah, they're huge. Alright. So, globally, one point three million people die per year in auto-related deaths. That's an average of just about thirty-five hundred deaths every single day from

(18:32):
auto-related deaths. Now, in the United States, it's significantly less, but it's still a high number. It's estimated, and we still only have estimated numbers, I think the latest year for concrete numbers is probably twenty fourteen, or even twenty thirteen, but it's estimated that thirty-eight thousand, three hundred people are killed every year in the United States alone on the roads, and that's about an average of

(18:53):
a hundred and five per day. So this is one
accident that's happened with Autopilot, and it gets so much attention. So much focus has been put on this one accident because of the Autopilot element that's been, you know, put out there into the media, and a lot of people are, of course, focused on that one thing. And I would guess that Elon Musk is,

(19:15):
you know, kind of scratching his head, saying, well, statistically this was bound to happen. I mean, of course, we don't want a car that is going to have this occur. We don't want it to happen ever again, but statistically it's going to happen. Accidents are inevitable, and so are fatalities. There's just no way to create a death-proof vehicle. You can't do it.

(19:37):
There's always gonna be some wild card that you just can't account for; there's no way to anticipate every potential scenario. And in fact, that is an important thing to point out: this tragic situation was kind of the perfect storm of elements for this accident to have happened. It should, in

(19:59):
most cases, not have happened. And in fact, you know, people have criticized Tesla. Some journalists have criticized Tesla not just for the fact that Autopilot appears to have played a part in a fatal accident, never mind the fact that there are so many other fatal accidents happening with so many other vehicles that, you know, don't garner

(20:24):
the attention that this one does, because it's not as novel an idea as Autopilot. But not only did they criticize them for that, but also, and I can understand this as well, for the fact that Tesla did not disclose the accident until after it had had this shareholder meeting where it had offered more stock and made more money. The company's response was that the

(20:46):
accident is not material to Tesla's business, and in fact, you would never expect other automakers to have to do the same thing. It's bad optics, you know; it doesn't look good. But again, there's no precedent for this. This is the first time this has happened, which automatically means that there's definitely going to be more

(21:08):
interest anyway. So, given the fact that it is Autopilot, I lean a little more toward thinking it might have been better for Tesla to get ahead of this rather than to talk about it more than a month afterward, but with a very logical explanation of why it happened and why it likely will happen again. It's just that this is

(21:30):
simply the first time it's happened. And then also he could have stressed, you know, that we've achieved a hundred and thirty million miles of travel without a fatality at this point. That's, again, way above average. Yeah. In fact, you know, he says, well,
if you extrapolate, which, by the way, I'm just gonna

(21:51):
say what Musk said, essentially. This extrapolation, I completely understand, is not an apples-to-apples kind of thing. Musk said, well, let's assume that in two thousand fifteen the Tesla Autopilot feature were universally available on all vehicles globally;
that would mean that there would be five hundred thousand

(22:11):
fewer deaths in two thousand fifteen due to car accidents.
I don't quite understand that at all. I mean, okay,
the global figure that I just gave you is one point three million, so I can understand the "half of that" point. But you can't say for sure that an Autopilot system would have corrected or made adjustments so that, you know, half of those people would still be alive. But yeah, it's interesting, because I understand how he's

(22:33):
doing his math. He's saying, globally speaking, there's one fatality for every sixty, I'm sorry, sixty million miles driven. That is a very important distinction. That's okay; huge factor there. One fatality every sixty million miles driven; with Tesla Autopilot's record, there's one fatality per one hundred thirty million miles driven.

(22:57):
Therefore, the Tesla Autopilot is twice as safe. Fuzzy math,
very fuzzy math. But we get his point: that driver assist systems, when used properly, really do improve the safety of a vehicle. And it's true.
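The back-of-envelope math the hosts are working through here can be checked in a few lines of Python. The inputs are only the figures quoted in the episode (one fatality per 60 million miles globally, about 130 million Autopilot miles with one fatality so far, roughly 1.3 million road deaths per year worldwide), not independent data.

```python
# Sanity-check the rough math quoted in the episode.
# Inputs are the figures mentioned by the hosts, not independent data.

GLOBAL_MILES_PER_FATALITY = 60e6      # one death per ~60 million miles, globally
AUTOPILOT_MILES_PER_FATALITY = 130e6  # ~130 million Autopilot miles, one death so far
GLOBAL_DEATHS_PER_YEAR = 1.3e6        # ~1.3 million road deaths per year worldwide

# "Twice as safe": ratio of the two miles-per-fatality figures.
safety_ratio = AUTOPILOT_MILES_PER_FATALITY / GLOBAL_MILES_PER_FATALITY
print(f"Autopilot miles-per-fatality is {safety_ratio:.2f}x the global figure")

# Musk-style extrapolation: if the global fatality rate dropped to the
# Autopilot rate, annual deaths would scale down by the same ratio.
extrapolated_deaths = GLOBAL_DEATHS_PER_YEAR / safety_ratio
lives_saved = GLOBAL_DEATHS_PER_YEAR - extrapolated_deaths
print(f"Extrapolated lives saved per year: {lives_saved:,.0f}")
```

Run as-is, the ratio comes out to about 2.17 and the extrapolated savings to roughly 700,000 lives per year, in the same ballpark as, but not identical to, the 500,000 figure quoted, which is part of why it gets called fuzzy math.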
That leads to the discussion about why people are

(23:21):
not using this properly, and whether this was the case with Joshua Brown's particular instance or not. Like, we suspect things, but we don't know for sure. But we do know for sure that people have been jackasses in Tesla vehicles running on Autopilot, because there are YouTube videos of it. Absolutely. And unfortunately, in this case,

(23:45):
for this guy, there are videos of him engaging in some of these activities. I mean, I've seen videos of people, not necessarily Mr. Brown, that are in traffic in Autopilot mode taking a nap, or playing cards, or playing guitar, not air guitar, but playing guitar, doing all kinds of things, just, you know, facing sideways instead

(24:06):
of forward. I guess it's fun to get online and kind of show what you can do other than driving in rush-hour traffic. That's the point a lot of these videos are making: look what I'm doing, I'm making use of my time aside from just driving home. I've got my laptop out, I'm still working, or whatever the case might be, shopping. Unfortunately, Joshua

(24:31):
Brown also has some of these videos out there that show him engaged in some of these activities. And you know, the thing is, these cars, especially the Tesla, they have, of course, engine control modules, lots of modules in this one, in the Tesla Model S. That's going to tell the tale. It's kind of like an event recorder or a black box, if you will,

(24:52):
for automobiles, and it's going to let us know what speed it was traveling, exactly what happened immediately before and after this accident. It's also going to let us know when the last time was that there was any kind of driver input into the system. All this can be read via code, and if somebody was careful enough to remove those modules properly, which I hope they were,

(25:13):
we'll be able to kind of piece this all together, you know, in reverse order, and figure out what happened. And it may be a while, or possibly even never, before we hear the details of that investigation; they may keep that tight. Yeah. But again, it's one of those things where we see people disregarding Tesla's very clear message about what Autopilot is intended

(25:37):
to be. I have said a few times this past week, when talking about this story, that part of the issue to me is calling the feature Autopilot in the first place. Again, not assigning blame to Tesla, but just saying that I think some people are taking this word "autopilot" and thinking it means more than what

(25:58):
it's supposed to do. We've got to remember that Autopilot is really a collection of systems. They're integrated, so that's pretty cool, but they're systems that have been found on high-end vehicles for the past few years, integrated in such a way that it almost feels like the car is really taking over. But it's not

(26:19):
quite there yet, and Tesla has maintained, like, no, it's not an autonomous type of thing. It can do a lot of stuff, and if your hands are on the wheel and you're paying attention, it may feel like, "I don't even need to be here for this to work." But the reality is that if you see a semi tractor-trailer crossing the road in

(26:39):
front of you, you're going to react by touching the brake or swerving to avoid that vehicle. You're going to see it ahead of time. I just have a gut feeling that there were no eyes up in this situation, there were no hands on the wheel at all, and there was probably no driver input for quite some time in this case. Again, just a gut feeling. But just from the way the accident happened,

(26:59):
I can't see it any other way. And of course, this is all barring any kind of medical situation. But I think we would have heard by now if that had been the case, if there was, like, a heart attack or a stroke, or a medical situation so dire that he was unable to interact with the vehicle in any way. I think that we would have heard about that by now, but

(27:20):
you never know. I mean, we'll have to let all that kind of come out in the wash, and it will. But you've got to understand that this is going to happen in other makes and models of cars as well that have systems that are autonomous-like or, you know, autopilot-like: the Mercedes that will do the same thing, or the BMWs that will do the same thing, or the Chryslers, or any

(27:42):
of these cars that can drive themselves in traffic. People are going to kind of take that to the extreme and not pay attention. You're just taking advantage of the situation, right. And, you know, there's a very real fear in the technology sphere that this will end up creating hurdles for autonomous vehicles down the road. I didn't want to do

(28:05):
a pun there, but I went and did one anyway. So, yeah, in the future, you're going to see some hurdles in the way of autonomous vehicles because of this. To me, it's apples and oranges, because with an autonomous vehicle, especially if you're taking the route that Google is taking, there's no way to interact with

(28:26):
that vehicle apart from telling it where you want to go. There are no controls, there's no brake, there's no accelerator, there's no steering wheel, there's nothing. And in that case, you have to really convincingly demonstrate that the technology is robust and capable of responding to a myriad of situations, well beyond just the typical traffic

(28:49):
situations you would run into on a day-to-day basis, but some of the more extraordinary experiences that you could encounter, because, you know, one day may not be exactly the same as the next. You may have one day where you're driving to work and everything is pretty much normal, and then maybe the next day you drive to work and, who knows, a poultry truck has just spilled a big load of chickens all across the road. That happens

(29:11):
here in Georgia. Let me tell you, that happens two or three times a year here in Georgia. Yeah, it's not that uncommon. I grew up in Gainesville, Georgia, poultry capital of the world. We've got a chicken statue in the middle of town; that's not a lie. And yeah, it happens. But it's one of those things where, if you're programming an autonomous vehicle out in Mountain View, California,

(29:33):
you're not necessarily thinking, hey, what happens if chickens are everywhere? Do we have a chicken plan? Yeah. But that's the sort of stuff you have to take into account. My worry is that we're going to see a disproportionate response from various agencies in the wake

(29:59):
of this accident, which, again, is a tragic accident. We hate to see this happen at all, and that they'll place a level of responsibility that is not merited. Yeah, you know, I think in the short term, my gut feeling is that you're right. But I think that it won't be too long before we're back on this full bore, where a lot of people, I mean, sad to say,

(30:21):
this will become part of Tesla's history, you know, and they'll say, well, yeah, that happened, but we then ran two million tests, you know, electronic tests, in order to prevent that from ever happening again, and now that situation won't ever occur again in a Tesla. Now, what's the next unusual set of circumstances that will lead

(30:42):
to a driver death? That's gonna be the next question, because you can only do so much. You can't foresee every single circumstance. Well, and that also reminds me, in fact, I should have said this earlier.
But that's a good point in that the Tesla Autopilot
is a beta service. It's in beta, which means
that it's in testing mode, it's not the final product.

(31:03):
And in fact, Tesla takes the data gathered from people
driving an autopilot mode in order to tweak autopilot and
make it more effective. So, in other words, this is
a trial of a program that they're still not legally
ready to even call autonomous. Yet people are treating it as if it's autonomous. So it's like two levels below that, at least. Yes, it's a level

(31:25):
where if you are treating it like it's an autonomous car,
you are behaving in a way that I would argue is irresponsible. Um. And you know, I'm hopeful that stories like this will become fewer and
fewer in number, simply because the improvements in technology have

(31:47):
us avoid some of these accidents where we never have
a story, right? Like, you're not gonna say, hey, seventeen more accidents didn't happen today. That's not a story, right, unless you had some sort of definitive proof of it. You know who I want to have autonomous systems or autopilot systems? Every driver around me. Because recently, I don't

(32:08):
know if this is a thing everywhere. I would guess
that it is, because of distracted driving with smartphones, et cetera, everything else. Uh, in the last five, seven days maybe, I've had several occasions where I've had to make an evasive move or maneuver as the vehicle in front of somebody who is coming up behind me with a device in their hand. And

(32:31):
it's just hard to try to maintain focus on what I'm doing when I'm having to watch what people in the lane next to me are doing, because they're constantly, you know, two wheels over the line, or, um, you know, someone behind me is, every time, braking late and nearly touching my bumper. It drives me crazy. If they had an
autopilot system, or even an adaptive cruise control that was

(32:53):
capable of stopping the vehicle like Mercedes or you know
BMW have, I would love it. Myself, I would, you know, remain in control. But for everybody else, I have a feeling that what you've just described is something that is universally wished for: everybody but me. I think that's the way it is. I know, how selfish. That's... I'm of the

(33:13):
everybody, period. But you know I don't try. Yeah, well,
there's something to be said for for that as well.
You know, if everybody had the system, it would work out that way, right. And there's so many things that are like that. If it was once implemented, a lot of things would be a lot safer. Yeah, a lot more boring. Boring, you say, but you could

(33:35):
get to where you're going with less traffic. Is it more... yeah, well, you would be able to fill your time with entertaining things. Is being in a car more boring, or boring-er? It's more boring. More boring. I can tell you what's more boring: us debating the word. So yeah, we're gonna wrap up that part of the discussion. So ultimately, uh, I'm hopeful that

(34:01):
that Tesla autopilot can continue to evolve over time and become better.
I'm hopeful that more people really take it to heart
that their level of responsibility has not changed when they
engage autopilot, that that system is there to improve their safety,
but not replace them as an actual operator of the vehicle,
not entirely at any rate. Remain vigilant behind the wheel. Yeah,

(34:26):
don't create more terrible stories. Like, I feel for Mr. Brown's family, and I really wish that
this story had turned out a different way. And of
course there's a second story that we could talk about
that, um, but we don't have all the details. There was another accident that happened with

(34:46):
the Tesla Model X. Yeah, that's right, that's their newest
that's the SUV-looking vehicle, and that was on
the Pennsylvania Turnpike. Have you ever been on the turnpike? Yeah,
you know what it's like. Yeah, my wife's from Pennsylvania.
So we've also been on it, and the story
in that case was that supposedly they had turned on autopilot,

(35:06):
the vehicle struck a guardrail on the right hand side.
The vehicle then veered across to the left hand side,
hit the concrete median in the middle of the road,
and then flipped over. Yeah, this is crazy because this
is not like, um, it just simply didn't detect something above the road surface. This left the lane. Yeah, so that's something totally different. We're going to find out a lot more about this one, and we don't

(35:27):
have the details yet. We do know that Tesla initially
said it doesn't look like autopilot was necessarily involved in
this particular accident, and then they kind of revised that
to say that the data so far has not supported it,
but that they are investigating it. Um And to be honest,
we don't have the full information. Right, We've got the uh,

(35:49):
the first hand account of the driver. Both the driver
and his passenger escaped. I'm sure they were injured, but they did not die. Maybe cuts, some bruises, or something like that. But yeah, it wasn't quite as, um, traumatic, I guess, as the other one was. Of course, so we don't
have enough information on that at the time of this

(36:09):
recording to really dive into was this actually a failure
of autopilot? Because if this were a failure under totally
different circumstances, then you have to start asking some pretty
tough questions about Tesla's technology. Can I just make a guess? I'll make a guess, an irresponsible guess: that somehow
the driver disengaged the system. Because the lane departure system

(36:31):
is something that's been around for so long and they've
got it so nailed down, really that something like that
that part of the of the autopilot system with everything
else that it has to think about. If it just simply left the lane because it couldn't detect the lane, that's, uh, I don't know, it seems like something is up there. You would think it would give
an alert to the driver to say, like bing or

(36:52):
something like, hey, you know, make sure you're taking control,
because, you know, I've heard of these systems having a little bit of trouble at sunrise and sunset detecting white lines on the road
because of the glare, and maybe that's a you know
factor in this. I would imagine such a thing. Like here in Atlanta, if it's been raining, uh,

(37:15):
and then you get sunshine. It is really hard to
see some of the lines on some of the roads
here in Atlanta. I do, I feel. Yeah, so when you start feeling that bump of the reflector, okay,
that's fair, I got you, all right, But again that's
all irresponsible speculation. Yeah, exactly. We don't know. So

(37:35):
so while we're making some wild guesses here, by the
time this episode comes out, maybe there'll be more information
that will be available, and perhaps if if it actually
is ultimately an autopilot issue, we'll revisit this and talk
more about it. But for now we're going to transition
over to a different story. And this was one that

(37:56):
you brought to my attention, Scott. This, um, this proposal that was part of the European Parliament, where they want to talk about robots. And part
of the proposal that's got the most attention is the
concept of electronic personhood for robots. Yeah, and that's what
I came to you with because I wasn't sure if

(38:18):
this is going to be in a podcast episode or not.
I just thought it was kind of an interesting little
news blurb. But I suppose... and I'll be right up front with you when I say that
I have more questions about this than answers. Really well, Scott, thankfully,
I have printed out the entire report and I have
read it, so I am happy to answer some questions.
Have you? Will you read it to us? Now? Slowly?

(38:38):
Twenty two pages? Let's begin. Actually, I do want to read.
I do want to read the first paragraph very briefly,
because I love it so much. So this is the
this is introduction section A. Whereas from Mary Shelley's Frankenstein's
Monster to the classical myth of Pygmalion, through the story
of Prague's Golem to the robot of Karel Čapek, who

(39:01):
coined the word, people have fantasized about the possibility of
building intelligent machines, more often than not androids with human features.
I would never have expected that to make it into
this draft. And that's not the only fiction reference made in this. So a famous, famous fictional piece about

(39:26):
robotics is mentioned, which is, of course Asimov's Laws of robots.
Have you heard of Asimov's Laws of robotics? So basically,
in case you're not familiar, first law is, a robot
may not harm a human or through inaction, allow a
human to come to harm. Second law is a robot
must follow the commands of a human unless it were

(39:48):
to violate the first law. Third law is a robot
must protect itself from harm unless it would bring it
into conflict with one of the first two laws. And
these are critical in this argument because they're going to
treat them as if they're human, with not only rights but also responsibilities. They're going to hold them accountable for their actions, which is... they're programmed by

(40:12):
humans at this point. So what is this? Where does this all sort out to here? Because the idea
that a robot that's in a factory creating a part
or you know, an automotive part, the idea that that
is soon to be considered an electronic person. Uh, it's crazy. It seems insane to me. But the

(40:34):
goal here, as we'll talk about, I guess, a
little bit more in depth, is that they'll be taxed,
they will pay into a social security system that will
then be drawn from by humans, but not robots. Robots
don't get social security. Well, yeah, they get compensation. All right, so these are... it's true. I'll tell you

(40:54):
about the compensation proposal. I'll tell you. Are we talking maintenance? No. Is that compensation? Really? You know what it is? Their compensation is money, money they get paid. Well, okay,
so that goes to the owner of the robot. But
the owner of the robot is the one that's paying
into the Social Security, right? The owner of the robot pays.

(41:16):
The owner of the robot pays the robot. Well, this
sounds like a real screw job. You want to know why? All right, here's the reason why. This gets... it gets clearer once you start to understand the logic, because
at first, on the face of it, you're like, this
is ridiculous. Why would an owner have to pay in
for social security for a robot which is never going
to draw on Social Security? Yes, why would you ever

(41:38):
pay a salary to a robot? Well, the compensation fund.
First of all, I was being a little unfair. That
was one possible solution to a very real problem. That
very real problem being when you get to a situation
in which a robot causes damage or harm, who is
liable for that damage or harm? And how do you

(42:02):
compensate the harmed party? And so one of the arguments
was the more autonomous a robot is, the more it
is able to interact with its environment in new ways,
like use machine learning to adapt to its environment and
work within that environment, the less liable you can hold

(42:22):
the manufacturer of that robot if it's not something that
is directly tied to a very basic function of the robot.
You could argue, well, the robot learned something, it learned
a poor way of doing that thing, someone or something
was harmed as a result of that. The robot's actually at fault, not the producer. This is a self-aware issue,

(42:45):
not even self-aware, but so much as, like... so, let's say there's a famous thing at Stanford where, uh, computer scientists set up a computer with artificial intelligence,
and they had to observe a pendulum as it swung
back and forth. Through observing the pendulum, the computer was
able to figure out the basic laws of motion just

(43:07):
by observing the behavior of the pendulum swinging without being
given any information about the laws of motion. Now extend that to an idea for an artificially intelligent device. It
doesn't have to be self aware. It just has to
be able to observe things within its environment and adapt
to them. Another great example would be let's say you've

(43:28):
made a robot, and you put that robot
on an assembly line, and the robot needs to put
together a certain thing. And then as the robot attempts
to put the thing together, at first, the robot's pretty crappy.
It's not doing it very well. But each time it attempts,
it tries something a little bit different and starts to
learn what works better than what worked before. You're not
actually programming the robot to do this. The robot is

(43:51):
learning how to do this through trial and error, essentially,
just as a human would on the same line, exactly. And most of the time, the robots we talk about in industry,
they don't fall into that category. They've been designed to
do a very specific series of actions, repeated repetitively. Like, that's it, and that's all they do.

(44:11):
They don't start putting together a car and then turn
around and flip burgers. That's not the purpose. The idea
being with these new robots, they'd be more adaptive and
could therefore change the way they do things in order
to try and experiment, do something a little more effectively,
if in fact, we get to a further stage where

(44:31):
robots are able to adapt even beyond that level. Because
really this proposal is more about we're heading down a
road eventually we're going to need to have legislation to
handle these issues. It would be better for us
to think about it now than when it becomes necessary,
like when a problem happens, we have to figure

(44:52):
out what to do about it. It's more about anticipating
those problems, all right, I see now. But then the
knee jerk reaction to this from a lot of people
is that, uh, you know, the robots are taking my
job, and also they're not paying Social Security, and so what's gonna happen in the end here? We're gonna have, uh, we're gonna have an economy
collapse because it's gonna be all robotic workers in the

(45:14):
in the factory, or we're going to have massive amounts of
unemployment across the board. Um, and no one's paying into
the system that already can't sustain the number of people
that have retired or are drawing from it right now. What's gonna happen when I'm in that position? Because it's gonna be an entirely robotic workforce. This is, like, just the idea that they have,

(45:36):
you know. It's not the reality. Really, the reality is that as the robots are slowly
being implemented in the workforce, humans are also increasing in
the workforce, but in different jobs. So the robots maybe
taking this position away slowly. And I've got a few
numbers here I can mention in a moment. Um, slowly. It's happening very slowly, for, um, you know,

(45:59):
like the end of the world scenario that they're thinking of.
You know, that, um, not the end of the world, but you know what I mean. Well, yeah, the idea of capitalism folding in on itself. Yeah. And they're
they're thinking of it happening almost immediately overnight. And if
something like that did happen overnight, that would be horrific. The economy would collapse almost immediately. It
would be really really difficult for for the system to

(46:20):
keep up. So it's a much slower implementation than what people are thinking it is,
and and that humans are in fact finding jobs in
other positions and places that you know then keep them
gainfully employed. And it's just it's a balance. It's all
a balance right now. But if things were to happen
quickly, as I said, it would throw things

(46:43):
into turmoil. But it's not happening that way. Yeah, it's
not happening that way right now. The question is will
it always be more of a gradual thing or will
will there be a tipping point? Right like will there
be a development in robotics and artificial intelligence where we
get to a tipping point where it's a rapid implementation and deployment of automated systems across multiple jobs. Like

(47:04):
we are seeing certain jobs get more automation involved in them,
not all of them stay that way. Sometimes a company
will go in with an automation strategy and then ultimately decide, hey,
this is not actually working out the way we thought
it would, and they back off of it. In other cases,
we see more and more automation go into that space.
And so what this proposal is trying to do

(47:26):
is say, in the case where perhaps we see more
jobs going away from automation than are generated through automation,
we would need to have a method to respond to that.
Here are some potential ways we could do that. But
they also actually acknowledge this is not necessarily the case.
In fact, one of the proposals in the draft is,

(47:47):
by the way, the draft report, I should read out
the title Draft Report with recommendations to the Commission on
Civil Law Rules on Robotics in case you were wondering
if there were such a thing. Uh, in the draft they actually state that one of the things
that should happen is the European Union should create an
agency or a department that actually monitors the number

(48:11):
of jobs in the EU that are quote unquote taken
by robots and the number of jobs that are created.
Because they said, we lack this right now, we
don't actually have a dedicated group of people that are
monitoring these job trends. And and in fact, if we
continue to see more jobs created than are being taken,

(48:33):
that becomes a non-issue, right? Like, but we don't know, is the problem. I found those numbers, if you want to hear. What? I want to hear. So, well, just, I guess, a precursor to this is that, um,
in that same draft you're talking about, the European Parliament draft,
the Committee on Legal Affairs said that organizations should have
to declare the savings they've made
in social security contributions by using robotics instead of people

(48:56):
for tax purposes. So that amount then is going to
be applied to tax. They will be taxed on that amount. So, um, well, okay, anyway, let's move on to this.
There's no proven correlation between increasing robot density and unemployment.
That's what I was getting at before, is that they pointed out that the number of employees in the German automotive industry, um, that, in fact, well, the whole

(49:18):
German automotive industry, rose by thirteen percent between two thousand and ten and, rather, well, industrial robot stock in the industry rose seventeen percent in the same time period. So, um,
humans up thirteen percent, robots up seventeen percent. Sure, but
they're they're right now coexisting. Yeah, and it's not like
one's declining and the other one's increasing, you know, exponentially. It's

(49:40):
just, they're kind of growing at the same rate.
It's just getting bigger, that's all. And I think I
think part of the issue here is that some people
who are commenting on the on the report have probably
not read it, at least not all the way through,
because, when I read it, the feeling I got
was not so much that they were saying, Uh, here's
the problem that we need to solve right now. It
was more like people have talked about the possibility of

(50:03):
this thing happening. If this thing in fact happens, we're
going to be in trouble unless we plan for that eventuality.
And they say, here are some ways that we could
at least have systems in place to make
that transition less painful. And the framework of time that
they're talking about here is a decade or even decades

(50:26):
in reality, because they said it's not going to happen overnight, like we've been saying. And the other quick thing that I want to mention is that,
as you said, most people haven't really read the draft
or what it really entails, but a lot of people right away will say, oh, well, my toaster can make toast on its own, and my refrigerator
knows when I need to stock it because it's a
smart refrigerator. Are you gonna charge my smartphone like that's a

(50:48):
person as well? Are you gonna, you know, tax me for, uh, you know, all the devices, my thermostat
that knows when I'm coming home and makes it warmer. Uh,
you know, where does this end? What's the end? Like, who's going to be taxed and why? Yeah, it's
it's interesting. They actually in the report mentioned that there
needs to be a new classification, either the whole

(51:09):
electronic personhood thing. They said that, uh, once we see
robots reach a certain level of sophistication, they have
gone beyond what we would typically refer to as an
object like a toaster. They say, yeah, a toaster is
an object. You're not gonna treat a toaster like it's
a sentient creature. And even a robot with advanced

(51:32):
uh you know, artificial intelligence doesn't necessarily mean it's sentient.
And they aren't even saying we should treat them as sentient.
They just say we might need to come up with
a new classification, something that's not human or animal, but
something with a legal classification to deal with robotics in
a manner that is consistent with just legal proceedings, because

(51:55):
there's not anything yet. Well, and one of those parameters
would have to be something that directly takes the place
of a human on an assembly line or in a
fast food restaurant or you know, wherever it would be.
It's not... it's not like a device that you use would be considered like you're taking a job away from somebody. You don't have somebody in your house that makes toast for you, and now

(52:15):
the toaster is automatic and it's taking away the toaster's job.
It's not that way. It's a device that
you normally would have to operate manually, but now it's
just a smart device and it does that for you.
It's just a handy thing. But it's not taking away employment from somebody. Yeah, it's, uh,
I mean, I think that particular idea is one that uh,

(52:39):
some technologists have really been bandying about as a possibility
like a decade or two decades out. You know, something
where we actually see more and more automation and artificial
intelligence taking the place of jobs and fewer and fewer
opportunities for humans. It's not like it's something that is
um imperative right now. But again, I think that the

(53:01):
people who drafted the report are doing it in a
way that is fairly responsible, in the idea that they are looking ahead. Like, they actually talk about how they need to look ahead ten to fifteen years and anticipate what skills are going to be needed on the job, which is really hard to do. Oh
my gosh, I was just gonna say this is impossible.

(53:22):
Because look back fifteen years ago and see where we were. It was entirely... I mean, not maybe fifteen years ago; look back thirty years ago, it's entirely humans doing all that stuff. And there may have been a few simple robots doing things. But look at, you know,
an automated assembly line now in an auto plant. It's
unbelievable what they're capable of and the number of robots

(53:43):
versus the number of humans doing that. But one thing I gotta get out here is this opens up so many boxes of worms in
that we're not talking about many other things here. This
is strictly this issue. But uh, there's gonna be the
whole you know, what are we paying these people to
do the jobs versus what we pay the robots to

(54:03):
do the job, which is not really paying them. It's more like paying for the robot to do the job, and programming and then maintaining it. And then what does it do to the price of the product? And
why is that in there in the first place, Because
they want to lower the price of the product by
not having that human that they're gonna have to pay
Social Security to for thirty years after retirement. The robots
don't do that. So there's so many layers of

(54:26):
this that we can argue back and forth, and it
gets very political, it really does. Yeah, I I imagine
at least that some of the arguments they've made for
things like the compensation fund for the robot, that's really
more of a 'break glass in case robot goes berserk' fund.
I mean that's really what it's meant for. The compensation
fund is really meant for a Hey, if your robot
happens to go berserk one day and rampage down the

(54:49):
street and slap kids around. You're gonna be paying some money. Well,
this is the safety fund for that. What if... so here's, maybe, imagine this scenario. Let's say a robot that's supposed to install the front window, you know, the glass, in the vehicle. What if it just starts flinging glass all over the factory, and does injure a human, or it does, you know, slice through

(55:09):
several hydraulic lines for the other robots? How do you compensate for that damage? That's... well, let's see, and it's one bad apple. One of the things they mentioned was that the compensation fund, like they said,
was a possible approach. The other thing they talked about
was obligatory insurance, similar to what you would have with
car insurance, only in this case it would be the

(55:29):
company's producing the robots that would be responsible for paying
that insurance, not the people who purchase or own the robot.
All right, I know that was a ridiculous example, but
it's perfectly a reasonable example. It goes berserk. Yeah,
you know what happens. They're saying, like, you know, as
we get to a point where robots are

(55:50):
able to act more autonomously within an environment, they're going
to encounter variables that you could not... this goes back
to Tesla Autopilot. They're going to encounter variables that you
would not have expected when you were first programming. Office politics. Yeah,
you know, maybe something, uh, simple, maybe it's something that hardly ever happens, but happens at this

(56:14):
one point, And how does the robot behave in that situation?
And their argument is that there are going to
be cases where it is truly unpredictable that you cannot,
just from its basic programming, necessarily anticipate what the robot's
going to do. Like, say, it's after work and
one robot hits on the other robot's girlfriend at the bar,

(56:36):
and then Monday, there's a lot of tension between the
two of them. Yet at work it's going to be
a difficult situation where they are standing around the WD-40 cooler. That's something you just can't account for
right now. Well, I mean, like, specifically, the passage that I keep kind of referring to here, it says, considers that in principle, once the ultimately responsible parties
have been identified, their liability would be proportionate to the

(56:59):
actual level of instructions given to the robot and of its autonomy, so that the greater a robot's learning capability or autonomy is, the lower other parties' responsibility should be, and the longer a robot's education has lasted, the greater the responsibility of its teacher should be. All right, so this should
have a lot of robot programmers shaking in their boots

(57:20):
right now, because that really ups their responsibility for this
machine, which is already great, but that will extend
into if it learns how to adapt to the job.
But then it's the robot's fault, as long as you can make the robot autonomous enough. Then you're like, I'm free. This is
like having a kid. I'm responsible for my kid up

(57:43):
to a certain age, and then after that it's the
kid's fault. I guess. So I mean it's like, well,
when when the robot turns eighteen, maybe that's that's when.
But that's the idea, that's the basic idea of the
idea is that if if you program a robot so
that it has very little autonomy, then you would argue,
logically speaking, the person liable for any damage to the
robot makes is the person who programmed it or built

(58:04):
it or whatever, because the robot can only follow the
instructions that were given to it. If you have a
robot that has more autonomy and more ability to extrapolate
from its environment, then it goes beyond the basics that
you equipped the robot with, and
so it becomes harder to say, oh, it's your fault
because your robot did this thing. So it's kind of

(58:26):
like having a kid and a kid growing up and
you're saying, you know, why can't you control your kid, Like, well,
my kid is becoming a person, and I mean, I'm doing the best I can in educating and giving discipline and giving guidance to my kid. But my
kid is a person. So occasionally my kid chooses to
do something that you know, we have to have a
discussion and say, listen, this was a bad choice you made.

(58:48):
Here's why it was. That kind of all part of
free will, it is. Yeah, So it's an interesting argument
though that it's like it's like a child that you
have to maintain control over for a certain amount of time.
And then there's a point where you say this is
operating on its own at this point, responsible for its
own actions. And again, the report is very clear
to say that this is stuff that we're talking about

(59:11):
that's decades out. It's not like we're gonna wake up tomorrow and our little remote-controlled robot is suddenly gonna be back-talking us. No, no. On the lines right now.
It's like you pick up part A and put it
in slot B and then do it again and again
and again for eighteen hours a day. And and the
report is essentially saying like we just want to make

(59:32):
sure that we're not unprepared when technology advances beyond the stage we're at now. I mean, Scott, we've seen
in the past, like with other types of technology, how
technological development far outpaces the law, and then we have
to figure out, okay, wait, what does this actually mean

(59:53):
legally speaking? What this draft is trying to do is get ahead of that and just say, like, let's set up the legal framework now, so that we
aren't caught in that situation when it ultimately happens. Well,
but it does. It does lead to some jokes. Yeah,
and what gets everybody's dander up is that you know,
they're calling them electronic persons, and that

(01:00:13):
right away is kind of a... I don't know, it's not like a bully move, but it's more like, um, this is definitely something that's replacing you as a person, or at least joining you. Yeah, at the minimum, let's say, it's
joining you. And then the other thing is that I
think a lot of people feel this way too, is

(01:00:34):
that wait a minute, you're gonna tax them as if
it's a human? Isn't the reason why they replaced that human with a machine that they didn't have to be responsible for it after that point,
like once it retires, because these robots don't retire, they
don't die. They have, you know, mechanical issues in there, and they are, I guess, retired in a way,

(01:00:55):
and that they're replaced by another one. But you don't
go on, you don't continue paying for it with a
pension for the rest of the robot's, you know, air-quote, life. You know, um, it's strange that people want to extrapolate this to, like, it's a human and that you'll have to care for it for the rest of its mechanical life as if

(01:01:17):
it's a human. But that's not the way it is.
Really? No, I think it's different. I think really what they were trying to get at is that
there are certain systems we have in place as human
beings that depend entirely or at least in large part
on employment taxes, and that if we in fact do reach a future, which is still not necessarily the

(01:01:39):
the thing that's going to happen, but if we reach
a future where automation really has taken over to a
great extent, Uh, what do we do to protect those
systems for the people who are still dependent upon them
until we reach a point where those systems are moot
because we've moved on to something else. In fact, one
of the other things they actually mentioned is that the

(01:02:00):
Member States could all start to consider the possibility of
a general basic income, which, you know, we've got a couple of countries out there in Europe that are experimenting with this to some degree. Uh, I did an
episode of Forward Thinking's audio podcast where I talked about this,
and I said, I would use that as a way

(01:02:21):
of just seeing what happens. And then, you know, do
you say, all right, it works, let's try it, or
do you say, oh boy, that didn't work at all,
let's not do that, before we implement it everywhere. And I think auto manufacturers are gonna hate this. I think it's gonna be bad for their business. I
I think it's gonna be bad for their business. I
really do. They're not gonna like this at all, because that was the benefit to them of having

(01:02:42):
that robot: it doesn't have the complaints, it doesn't have, you know, the same concerns. Right, you
have the upfront cost, and you have the maintenance cost of the robots, but you don't have these ongoing costs: you don't have a salary, you don't have benefits, you don't have these other taxes and other elements that you would have if it were a human workforce.
And if you start adding all these things that they

(01:03:04):
have to pay on the end of the owner of the robot, you know, the founder of the company or the owner of the company, whoever that may be. It may be a group of people. If they have to pay more for that, you better believe that price is going to be passed on to the consumer, and the price of your automobile is going to go skyrocketing. And it just leads to a lot of trouble. I mean, there's gonna be, um,

(01:03:25):
there's gonna be some issues, I think, if they really do go forward with this. And it's just a draft now; as I said, it's kind of getting it into people's heads that it may happen. But if they really do go forward with this, it's gonna cause a lot of trouble, I think. I think the important thing to remember is it's not just that it's a draft report, but also that this is a report that's giving recommendations, and most of those recommendations really

(01:03:46):
boil down to: let's form an actual official European agency
that is going to oversee this stuff and staff it
with people who are experts in their field, not just
technical experts, but regulatory experts, ethical experts, to do a lot of the legwork, to make sure

(01:04:06):
that if any legislative aims are created, they are created with the most information, from the most diverse perspectives possible, rather than, you know, this seems like we should try this, let's do this, let's go this way. Be ready, but take your time getting there. And I think that ultimately is the

(01:04:28):
message of the report. It is a little whimsical in places, the fact that it references fiction. You think? I love it. I think it's great. It was one of the things where you came to me and said, can you believe this? And I could see that you were excited. But I do a show called Forward Thinking, you know, and so I know how hard it is to try and predict the future. I have

(01:04:48):
to do it every week. Forward Thinking, it's sponsored by Toyota. This show is not, but Forward Thinking is. Anyway, doing the show here, we've talked about how it's funny, because in those episodes, when we talk about these big, big ideas, things like what

(01:05:08):
happens if an artificial intelligence becomes sophisticated enough that it is difficult to differentiate it from an intelligent creature, to the point that maybe it has some form of self awareness. Maybe it doesn't, but maybe it seems like it does. Like, that's a difficult question. What happens

(01:05:29):
if automation were to take over more jobs than it could create? All these different what-if questions. We always conclude with, you know, we don't really know, but it really is important that we talk about it and discuss it and start thinking, you know, how would this actually impact us now,

(01:05:50):
so that we're not caught by surprise by what happens later on. And I feel like this draft report is exactly what I keep saying. And so while part of it is funny, and you can joke about, like, your toaster becoming a person, uh, and I would make the same joke, I also admire that someone has taken the time to do what I keep asking people to do

(01:06:12):
every episode. I'm like, well, I can't make too much fun of them. This is exactly what I've been calling for for four years. It's become a reality. Yeah.
Now I'm just, like, amused that it happened. Also, I'll go ahead and tell you guys, this report is very easy to read. It's twenty two pages long, and it's really less than twenty pages, because the first

(01:06:34):
couple of pages are like a cover and a table of contents and stuff. It's a piece of cake. It's
actually you know, I mean, it's written in a little
bit of legal language, but not terrible. Like it's nowhere
near as dense as some of the stuff I used
to have to read when I worked in a law firm.
Those were dark times, my friend. Um, but it's

(01:06:55):
a pretty easy read. It's a little scattered because you
feel like some of the sections are repeated, but it's
worth checking out. And it's all available online in PDF form, so you can go and pull it down, read it, and see if you, you know, think, like: is this prudent? Is it really proactive? Is it reactionary?

(01:07:15):
Is it unnecessary? Should we not worry about this for
another ten years? I mean it certainly could spark at
least a conversation. We should also keep in mind, nothing in the report would become legally binding. It's really just a bunch of suggestions. It's just kind of a prep work framework, I guess, for them

(01:07:36):
to adhere to later. Right, it's not even so much a legal proposal as it is: hey, guys, I think it'd be a really good idea if we got some smart people to think about this stuff for a while. That's what it comes down to. Start with this and see what you come up with. Yeah, yeah, pretty much. And, you know, the fact that they say this is one possible solution, uh, they are implying at least that they don't know all the answers

(01:08:00):
and therefore they're just kind of throwing something out there,
and maybe there's a much better way of approaching it
than the way that they have suggested. You know, so the European Parliament is a lot like us. They have more questions than answers on this one. Yeah, but at least they're asking to get, you know, the top thinkers together to try and answer some of these questions. That's true, just like How Stuff Works did with us here, too. That's right. That's why we make the big bucks,

(01:08:22):
Scott. Get the top thinkers together in the podcast studio. Okay, that's a little too much. We were really the only two people here, so there was no other option. Really, that's right, it's an empty room, and I just happened to raise my hand. Yeah. Yeah,
it's either going to be, you know, you or Matt

(01:08:42):
Frederick, and man, I can't stand that guy. So I said that just as he was literally passing the window I was looking at. I love Matt. He'll never listen to this, so I don't have to worry about him listening to this. But just in case: I love you, Matt. All right. So, guys, first of all,
I gotta thank Scott again for joining me on this
show and bringing bringing the European Parliament proposal to my

(01:09:04):
attention because it has given me a wealth of material.
Thank you for asking me into the studio again with you. I always have a good time, and it's always a fun conversation. It's a good back and forth between us. Yeah, yeah, we have a blast. There are certain people that I just feel a fun rapport with, and I'm happy to say,
at no time during this particular conversation did I set

(01:09:24):
out to break your heart. Yeah, you didn't. You didn't even physically strike me, either. Well, sometimes you do it. Sometimes? I mean, you know, it's a Friday and I'm tired. Honestly, I don't think I have the energy. Yeah, you know what, usually you do. You try to sting me. Usually I withhold one important piece of information and then see how

(01:09:45):
you react. It normally has something to do with people
not driving anymore, but I'm not doing that today. So
of course you can go check out CarStuff. Scott is a co-host on CarStuff along with Ben Bowlin, and he does amazing work here at How Stuff Works, so check that out. And guys, if you
have any questions or suggestions for future episodes, write me

(01:10:07):
my email address is tech stuff at how stuff works dot com, or drop me a line on Twitter or Facebook. The handle at both of those is tech stuff H S W, and I will talk to you again really soon. For more on this and thousands of other topics,

(01:10:28):
visit how stuff works dot com.
