
December 13, 2025 39 mins

Nate has ridden in a Waymo, and it was like stepping into the future. Maria’s never been in one, but she’s been stuck behind a lot of autonomous vehicles… They swap human-driven car horror stories and discuss some of the risks and benefits of a future full of self-driving cars.

From the New York Times: The Data on Self-Driving Cars Is Clear. We Have to Change Course.


For more from Nate and Maria, subscribe to their newsletters:

The Leap from Maria Konnikova

Silver Bulletin from Nate Silver 

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin. Welcome back to Risky Business, a show about making
better decisions. I'm Maria Konnikova.

Speaker 2 (00:31):
And I'm Nate Silver. Today on the show, Maria, have
you ever taken a Waymo or another autonomous vehicle?

Speaker 1 (00:38):
I have not, Nate, but I know, I know, I know,
I know that you have. We've talked about that before.
But yeah, we'll be talking more in depth about kind
of the whole autonomous vehicle future, the risks, the rewards,
what's good, what's bad, how we should think about it,

(00:59):
how we should evaluate it. I think there are lots
of questions here and lots of things to think about
as we kind of figure out the future of driving.

Speaker 2 (01:06):
We got a little risk, we got a little psychology,
we got a little algorithm talk.

Speaker 3 (01:10):
It's gonna be a good episode, Maria.

Speaker 1 (01:15):
So, Nate, you told me about some amazing experiences that
you've had using Waymos in San Francisco.

Speaker 3 (01:23):
Yeah. No, I uh.

Speaker 2 (01:25):
The first time I took a Waymo, I was actually
at a meeting with Manifold, which is a prediction market company,
just shooting the shit. I'm friendly with those guys, and
they're like, we're gonna hail you a Waymo back to your
hotel in San Francisco, and I'm like, okay, this seems
kind of stupid, right? I don't know, I was a little wary. And
like, it was fucking, it felt like the fucking future.

Speaker 3 (01:47):
Right.

Speaker 2 (01:47):
You get in there, there's no driver, there's like space
age music playing, and like San Francisco is like not
a particularly easy city to navigate. There's a lot of traffic,
there's lots of hills. I wouldn't call SF the most.

Speaker 3 (02:02):
Traffic obedient place. Probably better than like Boston or something. Right, Oh,
I don't think.

Speaker 1 (02:07):
Anything is worse than Boston when it comes to.

Speaker 3 (02:10):
You know, my dad had like it.

Speaker 2 (02:12):
We lived briefly in Boston for a year, and like
he got falsely accused of, like, being in a
car accident. There was a Brian Q. Silver and he was
Brian D. Silver. Anyway, we'll leave it aside, family lore.

Speaker 3 (02:24):
Brian Q. Silver, fuck you, Brian Q. Silver. He fucked
up my dad's credit rating.

Speaker 1 (02:29):
Oh my god, No, that's I mean, this is a
fun like this is a hilarious side of family lore,
but like this is actually a consideration when it comes
to self-driving cars versus human-driven cars.

Speaker 2 (02:40):
No, it felt like the fucking future, right, and like it,
like I would say, and by the way, there's also
like lots of let's be honest, especially a couple of
years ago, and this happened a lot of vagrancy, homeless people,
sketchy people in downtown San Francisco, right, So like you're
dealing with an unpredictable element that might not be programmed

(03:00):
into some baseline model of like how driving is supposed
to work. But now, the Waymo did like a very,
very good job. It's very smooth with the acceleration and
the deceleration. It's a little bit, a little bit nitty
about following traffic signals and things like that, as
you would expect, but it did, like, lots of, I
think, fairly sane and rational things, right? Like I was

(03:23):
staying at some slightly funky hotel and like it pulls
up on the opposite side of the hotel where it's
like safer to get out and not the officially recommended spot,
and like it just you know, it would be like
in the ninety seventh or ninety eighth percentile of Uber drivers,
I would say, you know, whether or.

Speaker 3 (03:37):
Not you like the conversation, Yeah, go ahead.

Speaker 1 (03:39):
I was gonna say, it's funny that you say they're nitty.
So I'm in Las Vegas right now, and Vegas is
rolling out a lot of autonomous vehicles, not just
Waymos, but what's the, what's the other one called? Zoox. There
we go, Zoox, all right. So Vegas is rolling
out a lot of autonomous vehicles, including Zoox here. So

(04:03):
when I came back for the NAPT, the North American
Poker Tour, last month, it was the first time that
they had done really the full rollout already, where you
can hail them, you know, as Uber, Lyft, et cetera. Well,
actually I don't know if it's both Uber and Lyft. Anyway,
you can use ride-sharing services with them, and they're
all over the streets. And I was trying to drive,

(04:24):
and I realized very early on, and this says something
about how close I like to you know, I sometimes
cut things a little bit close when it comes to
making it to places on time. I realized right away
that I did not want to be behind them. Basically,
if I saw them in my lane, I tried to
like get into a different lane because they would not

(04:44):
go a single mile above the speed limit. They would
stop like and sometimes get a little bit, you know, confused.
They were very nitty, very cautious drivers, and I was like,
come on, guys, you can go thirty seven in the
thirty five zone.

Speaker 3 (04:56):
It's not going to kill you.

Speaker 1 (04:57):
It's gonna be okay. But they wouldn't, which was
really interesting. And I don't know if it's you know,
if it's this particular company, but they were definitely very
nitty drivers, which, by the way, is not necessarily a
bad thing, right, Like they were being they were being cautious,
they were being safe. But for me, I was like, okay,
I want to go slightly faster. Sorry, Las Vegas Police.
I never go more than a few miles above the

(05:18):
speed limit, I promise, But like I got stuck behind
them a few times, and like they wouldn't take some
turns at intersections which I think a human driver might
have taken. Probably it's a little bit safer, Like they
just really play it safe in those scenarios. But I
just want it to be like, Okay, you can go,
you have room, let's go, let's make that turn.

Speaker 2 (05:40):
I mean, there are there are times when it's probably
I don't drive, actually I do bike sometimes, right, like
on the city bikes or whatever. Right, there are times
when violating traffic laws is probably safer. Yeah, you know
what I mean, like mile violations where or just on
the common sentence, so you're trying to yeah, I don't know.

Speaker 3 (06:00):
I think that's really and that's more true in New
York than probably other locations. I would think.

Speaker 1 (06:05):
Now, sometimes you do have to make these judgment calls
because sometimes like you will see something and in order
to be safer, like you will have to violate a
traffic rule. And that's I think one of these broader
you know, as we as we start talking about kind
of the broader risk reward and kind of how this
future looks, and you know what we think about autonomous vehicles,

(06:28):
I think that a lot of the questions do revolve around,
you know, how do you program the cars to actually
make these decisions, because you know, it's not It would
be one thing if we had this immediate transition right
where like you can just go like this snap of
the fingers and all of a sudden, all the cars

(06:49):
on the road are autonomous vehicles. That's very different from
having kind of multiple years of in between where you
have human drivers and autonomous vehicles and you know, more
human drivers than autonomous vehicles and all of that.
That transitional period, I think, from a risk-safety
standpoint and from making those types of trade-offs, is

(07:11):
probably going to be the most difficult to navigate because
you have to account not just for ideal driving and
what you're supposed to be doing, but what humans actually
do right, and the mistakes that humans make, and how
you share the road with humans, some of whom are
bad drivers, some of whom are impaired drivers, right, because

(07:32):
those risks don't suddenly magically go away. Now, it's kind
of you know, it's funny. We talk a lot about
poker and kind of ideal poker on this show. It's
kind of like, you know, if you have the GTO
cars right, but then you have all of these crazy
and different types of players on the road, and you
have to try to figure out like, sure, you have
to drive in a GTO way, but you also have

(07:52):
to make sure that you are optimizing for the fact
that you have, you know, these loose cannons and all
of these different elements that are still going to be
sharing the roads with you. And unless there's like this
huge thing where all of a sudden, the government's like,
we're going to pay you, you know, tons of money
to give up your car and use autonomous vehicles. Like
that transition, to me is going to be one of

(08:13):
the riskier parts of this whole equation.

Speaker 3 (08:17):
Well, I assume that these.

Speaker 2 (08:20):
Cars are programmed in, like, kind of, what you'd call,
like, an exploitative strategy in poker, right? But presumably
they are at least doing some defensive driving, right. I
mean, again, San Francisco is not a place which is ideal
for traffic laws or anything else, really, right, and they're
kind of training in these real-world environments.
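[Editor's note: a rough sketch of the "exploitative"/defensive-driving adjustment Nate is gesturing at, purely for illustration. The fields, scores, weights, and the idea of scaling following distance are assumptions, not how Waymo or any real system is built.]

```python
# A toy sketch of defensive-driving adjustment: start from a "by the book"
# following distance and widen it when nearby human drivers look erratic.
# Everything here (fields, weights, thresholds) is hypothetical.

from dataclasses import dataclass

@dataclass
class NearbyDriver:
    speed_variance: float   # how much their speed fluctuates
    lane_drift: float       # how much they wander within their lane
    signals_used: bool      # did they signal their last maneuver?

def erraticness(d: NearbyDriver) -> float:
    """Crude score: higher means the driver looks less predictable."""
    score = 0.5 * d.speed_variance + 0.4 * d.lane_drift
    if not d.signals_used:
        score += 0.3
    return score

def following_distance(base_meters: float, drivers: list[NearbyDriver]) -> float:
    """Widen the baseline gap in proportion to the most erratic neighbor."""
    worst = max((erraticness(d) for d in drivers), default=0.0)
    return base_meters * (1.0 + worst)

# Example: a neighbor who swerves, speeds up and slows down, and never signals.
print(following_distance(30.0, [NearbyDriver(speed_variance=0.8, lane_drift=0.5, signals_used=False)]))
```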

Speaker 3 (08:40):
I mean, look, I mean to back up a second, right, Like,
in general.

Speaker 2 (08:43):
There's this critique that, like, AI-driven systems are good
with symbol manipulation tasks, so language, certain types of math, right,
certain types.

Speaker 3 (08:53):
Of games when they're well trained.

Speaker 2 (08:54):
I know we've talked in this program about how poker
if you're not training on poker specifically, might be an exception.
So it is kind of amazing that like that they're
as good as they are, right, But like, yeah, the
thing people don't recognize is, like, the human drivers
have lots of problems, you know what I mean.
They're distracted. I mean, I was just in, where was
I was just in Florida, right, And like.

Speaker 3 (09:18):
In Florida you can talk on your cell phone when
you're driving.

Speaker 1 (09:23):
And like, oh my god, is that true, Nate?

Speaker 3 (09:26):
Oh my god. That's neither here nor there.

Speaker 1 (09:29):
No, no, no, that's actually that's actually both here and there.
I don't remember if I've ever told our listeners, but
one of the scariest experiences I've had in an
uber was in California. It was in LA. We were
stuck in traffic in the middle of you know, one
of these horrible LA freeways, and my driver was already
like exhibiting signs of strange behavior, and then he just
like stares intently at me in the rear view mirror

(09:52):
and says, do you believe in God?

Speaker 3 (09:56):
That I was like, oh no, oh.

Speaker 2 (09:57):
God, no, oh no.

Speaker 1 (10:01):
Anyway, this is actually very germane, because that is one
of the things that I think self-driving, the autonomous
vehicles, definitely have a leg up on. You are not
going to have one of those "do you believe in God?"
experiences that are going to make you want to immediately
get out of the car. And we'll be back right

(10:28):
after this. So I think that one of to me,
one of the big good things about the future of
autonomous vehicles is the fact that you kind of sometimes

(10:50):
like this is evidence for when taking the human equation
out of it can be good. Right, You don't have
to like every time that I'm coming home, you know,
every time I'm going back to Brooklyn from JFK. When
I get back to New York, I end up taking
the taxi, but I get into arguments with the driver
every single time because inevitably they want to take the

(11:13):
Belt Parkway, which is never the best way to go.
You always get stuck in traffic, it's always horrible, but
they always want to do that, and we just get
into a back and forth. I'm like, no, I want
you to do this, and they're like, no, I'm gonna
do that. And with presumably with autonomous vehicles, you know,
a lot of this won't come up, right? They have
the routes and they will kind of, they will, they will.

Speaker 2 (11:33):
Yeah, I'm a little more sympathetic empirically, and I
don't drive, but my partner does and we drive a
decent amount. Empirically, I think, like, when the GPS
says something that like you as an experienced driver in
that area.

Speaker 3 (11:50):
Disagree with I think it's kind of fifty to fifty.
Who's right? Yeah and who isn't? You know what I mean?

Speaker 2 (11:55):
From like little things like knowing, oh, this street is
closed when we used to live near Penn Station. Right,
there's a Knicks game starting. You don't want to fucking
drive anywhere near Madison Square Garden when there's a Knicks
game starting. Or I mean, look, obviously, if you're in
an Uber or a Lyft, then the estimate of when
you'll arrive at your destination is strongly biased toward optimism.

Speaker 3 (12:17):
You know, this is why I want, like, I want
prediction markets. We need a Polymarket on this, and we can bet.

Speaker 1 (12:22):
Yeah, we need psychology actually in this because I think
that I actually think that cars get this absolutely backwards,
because it's incredibly frustrating when you see, like you go
to order an Uber or a Lyft and it's like two
minutes and then it ends up being eight minutes because
they basically have lied to you and they know that
right away, and then they're like, oh, it's going to
take you, you know, ten minutes to get here, or this

(12:44):
traffic slow down is a fifteen minute slow down, whatever
it is, it's always optimistic right, And the way you
need to do it psychologically speaking is you want to
do the old underpromise over deliver, right. They should actually
know that this is statistically speaking, you're probably going to
be at the other side of the distribution. So instead,
let me say you know that your uber is coming

(13:06):
in eight minutes, and then when it arrives in four minutes,
you'll be happy instead of frustrated. Let me tell you
that the slowdown is going to be twenty five minutes,
and then when it's over in twenty minutes, you're gonna
be like, oh my god, it was much shorter.

Speaker 3 (13:18):
This is great.
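[Editor's note: a minimal sketch of the "underpromise, overdeliver" idea Maria describes: quote a high percentile of the historical arrival-time distribution rather than an optimistic point estimate. The sample data and the 80th-percentile choice are invented for illustration.]

```python
# "Underpromise, overdeliver" for ETAs: quote a high percentile of past
# arrival times instead of the mean, so most riders see the car arrive
# earlier than promised. Data and percentile are made up for illustration.

import statistics

def quoted_eta(historical_minutes: list[float], percentile: float = 0.8) -> float:
    """Return the ETA to display: the given percentile of past arrival times."""
    ordered = sorted(historical_minutes)
    index = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[index]

past_pickups = [3.5, 4.0, 4.5, 5.0, 6.0, 7.5, 8.0, 9.0, 11.0, 14.0]
print("optimistic (mean) quote:", statistics.mean(past_pickups))  # ~7.25 minutes
print("cautious (80th pct) quote:", quoted_eta(past_pickups))     # 11.0 minutes
```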

Speaker 1 (13:18):
So I actually think that psychologically they get this totally wrong.
We're talking about self driving cars in your.

Speaker 2 (13:25):
Room at the Bellagio, and I'm gonna, I'm gonna,
I'm gonna order a Lyft in six minutes, right, and
it's like they're there in two minutes. You have to get
down to the fucking ride-share thing. You don't take these
Zooxes in Vegas, I guess, very much, right?

Speaker 1 (13:39):
No, that's it's yeah, No, that's a different thing. But
I think just right now, there's no reliable gauge
for how quickly or not a car will actually come.
But I wonder if autonomous vehicles will actually solve this
problem or if they're just gonna lie the exact same way,
right that this is a problem with the underlying algorithm
as opposed to I think anything else. The other thing

(14:02):
with autonomous vehicles is actually where you might need
a human. So talking about GPS and all of these things,
for whatever reason, where I live in Vegas was, for a
very long time, for some reason, misrepresented on the GPS,
and drivers would actually like go to the totally wrong place,
and it was an absolute nightmare when you needed to

(14:24):
get picked up because somehow the GPS just completely fucked up, right,
and so people would just not be able to find it.
And so if there's a human there, you can talk
to them and be like, hey, okay, no, you need
to do this, you need to take a turn here,
you need to do this to get me. What do
I do if it's a Waymo that's coming to pick
me up? How do I make sure that when you

(14:46):
need kind of that human intervention because the technology is
fucking up. Like I said, I've never taken Waymos, so
I don't know if you can interact with them in
the same way. Maybe this isn't a problem, but it
seems like it might be a point of friction.

Speaker 2 (15:05):
I mean, look, one problem with algorithms, really with
the people who design the algorithms, right, is, like, there's
like a lot of over-literalness.

Speaker 3 (15:18):
You know what I mean. Like I used to live near.

Speaker 2 (15:21):
A corner near Penn Station, right, and this is a
very busy, all-times-of-day intersection, right? But there
are lots of times when it doesn't make sense to
loop all the way around the whole avenue when you
can just drop me, like, one block north or
one block south, right? And like, so, like, that logic

(15:44):
that, like, a human driver has, I'm just gonna drop you
at Twenty-Ninth and Seventh and not loop.

Speaker 1 (15:49):
Around, because otherwise it's gonna add five minutes to fifteen
fucking minutes, yeah, fifteen minutes, whatever it is, or like, hey, pick—

Speaker 2 (15:55):
I'm gonna pick you up on Seventh and not
on Thirtieth. Like, that kind of thing has never been
a strength of any of these, right, because they're, like,
overly literal about, like, okay, get me to the general area.
And like, this is, you know, a lot of the
pickup and drop-off locations can be overly literal at
some properties and things like that.

Speaker 3 (16:13):
I mean, I don't know.

Speaker 1 (16:15):
Yeah, presumably algorithms will get better, and they'll they'll get
better at things like that, but there are other safety
issues that I do wonder about. And by the way,
a few weeks back, there was a big op ed
from a surgeon in the New York Times about autonomous
vehicles and you know, accident rates and not just fatalities,

(16:36):
but the types of injuries that the physician you know
has seen in different situations. And it was arguing for
more autonomous vehicles because of, you know, all these insane
situations that human drivers get into, and the fact that their safety
record is much better. Obviously this is a tiny fraction,
you know, et cetera, et cetera, et cetera. But that

(16:57):
made me much more kind of pro autonomous vehicles than
I might otherwise have been, because I think that this
is this is a big, a big deal, and we
do want to be thinking about, you know, about those
types of questions. But some of the other kind of
algorithmic questions are you know, how do you and this
is you know, I don't want to get all like

(17:19):
philosophical here, But it does like get into kind of
these morality problems, types of situations where like whose life
do you prioritize? Right? Like how do you make those
types of judgment calls? What is the algorithm going to
do in different situations where there's no good answer right
where you might endanger your passenger or endanger someone crossing
the street, or do this or do that, especially as

(17:41):
we started off the show talking about in this transitional
period where you're not dealing with other autonomous vehicles, right,
you're dealing with a lot of humans on the road.
So how do you make those trade offs? How do
you program that? You know, who's responsible for programming those algorithms?
How are they thinking about it? I think these are
all really important questions where it's not a question of
tweaking the algorithm and saying, oh, like let the person

(18:03):
tell them to like pick me up at this corner instead.
These are much deeper questions.

Speaker 2 (18:09):
Yeah, I mean, some of our listeners are probably
familiar with the trolley problem.

Speaker 1 (18:14):
Yeah. Yeah, the trolley problem is your is your gold
standard here and all the iterations of.

Speaker 2 (18:20):
It, which is a philosophical dilemma. I guess where there
is a trolley on a busy track. It's going to
kill I guess some people who are tied to the track.
You can also divert it and instead of killing these
five people, you kill like two workers.

Speaker 3 (18:36):
I mean, there are there are lots.

Speaker 1 (18:38):
Of infinite variations of it.

Speaker 2 (18:39):
I think it seems pretty obvious, like, divert it, kill the
two and not the five, right? But like, the point is,
if you're kind of, like, intervening and making a decision,
then it begins to feel like, I feel like, yeah,
you've made a moral choice, right? And.

Speaker 1 (18:53):
And what if you know that one of these is
an infant, one of these is a Nobel Prize-winning scientist, right?
what if you actually know something about the identities? Like
you said, there are infinite variations of this, of the
trolley problem, and there's no definitive answer. Like philosophers disagree
on the answers to this to this day. There are
different philosophical schools that have that advocate different approaches. And

(19:14):
this is I mean, autonomous vehicles have to deal with
one big trolley problem, right in some sense.

Speaker 2 (19:21):
Yeah, and when you have to quantify things, it's
one of the things that I will defend some of
the effective altruists and, like, rationalists about, right, is
they'll be like, well, what's, what's the value of a
shrimp's life compared to, like, a human life? Yep. And
it can be a little off-putting to think about.
On the other hand, we make the trade-offs all

(19:43):
the time, right. I talk about an example in my
book from a few years ago where like the entire
New York City, not the entire New York City subway system,
several lines, like the F, whatever lines, were shut down
because of a loose dog that got in somewhere
in Brooklyn Heights, I think, right? And like, you know,
I mean, there is a big cost to shutting down

(20:04):
the subway.

Speaker 3 (20:05):
It makes people late.

Speaker 2 (20:06):
It means that can include like emergency workers and things
like that, because often in New York the fastest way
to get around is with the subway, no matter, you know,
unless you have an ambulance with the right of way
and things like that. Right, and people are comfortable talking
about like what is the cost of a lost hour
of economic productivity? And how's that weighed against a dog's
life and things like that.

Speaker 3 (20:26):
But the point is that like that.

Speaker 2 (20:28):
When you kind of force people to make decisions that
are explicit and explain their logic, then, then that
becomes problematic in some ways.

Speaker 1 (20:37):
Right, Yeah, And you know when you are talking about
all of these autonomous vehicles and all of these different companies,
you have to realize. I mean we've talked about this before,
but I think it's a really important point that algorithms
do not exist in a vacuum.

Speaker 3 (20:53):
Right.

Speaker 1 (20:53):
They are based on, there are humans who are programming them.
There are people who are weighting different inputs. Those are
all judgment choices. Even when you're looking at, you know,
AI and, like, LLMs, that's also trained on some set
of data. Right, which data is it trained on? Who
created that data? And like those weights really matter, right,

(21:14):
they can make a really big difference. And like all
of a sudden, something's going to go off the rails
because of a tiny, tiny tweak. And there's no such,
there's no such thing as objectivity where, because it's
being made by an algorithm, it's objective and it's perfect.
Like, that just doesn't, that's, I don't know.
I just think that human beings are absolutely, I just

(21:35):
mean in algorithms like it's this, it's this myth and
this you know, misc not misconception, because I think a
lot of people agree with me. But it's this just
thinking that, oh, like, because it's like an algorithm and
a computer, it's free of human bias. And that's just
not true. It's not free of human bias. It's not
free of human error. And so you have to think about,
like who are the people in charge, what kinds of

(21:58):
choices are they making, what weights are they putting on?
You know, I'm going to value the life of this
person versus that person, Nate, if I know that the
passenger of a Waymo is a Democrat or a Republican,
you know, am I going to Am I going to
make different choices based off the value of their life?
That sounds I mean, I'm being kind of silly on

(22:21):
purpose because like, you know, I'm trying to make a
point where, like, you understand that these are all subjective
judgments, and who knows, right, in the black box of
the algorithm how people end up coming up with them,
And and now imagine that, Okay, fine, like we've we've

(22:41):
tried to do the best we can we being the
engineers and the people putting this together. I think we've
got it, you know, down, and then what happens if
there's like a camera malfunction or that something is misperceived
as something else, and, you know, the car does something horrible
because it thinks that it's about to hit a dog,
but it's really not about to hit a dog, right,
Someone just like left a scooter or something like that.

(23:04):
Those types of things happen as well, and so you
have just whenever you a lot of technology, there are
lots of parts where it can malfunction, it can go wrong.
So last week, please don't judge me, Nate, I rewatched
the movie Speed. It didn't hold up very well. I
have to say I was very disappointed. I'm sorry if

(23:28):
anyone in the audience is a huge fan of Speed.
Just it did not hold up very well. The acting
was horrible. Just everyone was a little bit off. Anyway,
there's a scene this is relevant. There's a scene in
Speed and the conceit of the movie. This is not
a spoiler for anyone who hasn't seen it, because you'll
figure it out within the first ten minutes. Literally, the

(23:48):
only conceit of the movie. There's a bomb on the
bus and the bus cannot go below fifty miles an
hour if it goes below fifty miles an hour, the
bus will blow up with everyone on it. That's the
entire movie. That's the plot. So the bus is going
and they're all on city streets in LA, which obviously
is not very conducive to going fifty miles an hour.
By the way, Nate, the only implausible point, well, there

(24:11):
are lots of implausible points of that plot, but somehow,
on a highway in LA it managed to go faster
than fifty miles an hour. I was like, guys, that's
just never happening. That is never happening. There's way too
much traffic in LA. Anyway, we're on surface streets, we're
going over fifty miles an hour, and there's a woman
with a baby carriage and she is going to get hit.

(24:32):
The baby carriage is going to get hit by the bus,
and what we can see as humans, which an autonomous vehicle
might not realize, is that there's no baby in the
carriage; it's filled with bottles. So this is, you know,
someone who was collecting bottles from the street putting them
in a baby carriage. So the carriage goes flying and
the bottles shatter, but no baby is actually hurt. But
I was actually just thinking of this just now when

(24:54):
we're discussing this. You know, what if that happens and
you're in an autonomous vehicle and the autonomous vehicle doesn't realize,
you know, baby carriage doesn't necessarily mean baby. You have
to kind of figure it out, and, like, it does something
that will endanger all the people in the car or on
the bus because they don't want to hit the baby
because priority says save baby. It's not even a baby, right,

(25:16):
but it's in a baby carriage. So you know, all
of these things. This goes back to what we talk
about often that a lot of the AI can be
and all of these things can be really really smart,
but there are certain things that are second nature to humans.
We don't even realize we're making these judgments that are
really really difficult to replicate. Right, there's something that we
just do unthinkingly, but that it's very hard to program

(25:39):
into an algorithm, into a computer or teach them how
to do that correctly.

Speaker 3 (25:44):
Yeah.

Speaker 2 (25:44):
On the flip side, when algorithms make counterintuitive quote unquote
decisions even when they're correct. And believe me, I'm not
the world's most unskeptical admirer of algorithms. I think there
are lots of bad algorithms and bad models out there.
I've built lots of good models and some bad models myself,

(26:05):
probably, over the years, and seen, and seen them from others.

Speaker 3 (26:08):
Right.

Speaker 2 (26:09):
But you know, I was in Florida, like I said,
and my friend was driving me back to my hotel
from dinner. Nice of him, but we'd had a little
wine at dinner, and he's like, I have a Tesla.
I'm just gonna turn automated mode on and not touch
the steering wheel, and like it did pretty well, but
like it took a route that he would never have

(26:31):
taken back to the hotel, right, And like he's.

Speaker 3 (26:33):
Like, oh, actually this makes sense.

Speaker 2 (26:34):
I understand why it would do it, and so and so,
like you know, and that's a very low stakes decision, right,
But like you know, often with these directions, it's like, Okay,
if people have this intuition that you want to take
the most direct route, whereas oftentimes going out of your
way is in principle faster, you could probably test that
robustly or not or things like that, right, or maybe

(26:56):
there's a safety thing that seemed like it's dangerous, Like,
but if you deviate.

Speaker 3 (27:00):
From like the norm.

Speaker 2 (27:01):
Yea, then then you get punished for it, right, and
so like it might be that, like, obviously these vehicles
should be regulated. It might be that regulations that require
more like explicit and transparent thinking, yeah, are actually less safe.
I want to be very careful with that. I don't
think that's the case for like really large language models,

(27:24):
for example. I think we need far more transparency with those.
I think they fail in ways that.

Speaker 3 (27:28):
Are a little unpredictable.

Speaker 2 (27:31):
But the problem is, like, yeah, people will blame these
cars even if they have, like, you know, whatever,
twenty or twenty-five percent of the accident rates that humans
do. And, and, and plus you have vested interests, right?
in New York it's a city with a lot of
union activity. You know, New York also has these you know,

(27:54):
amazingly expensive, frankly, Ubers and Lyfts and things like that.
It's amazing when you go to, like, a city like
in Florida and you're like, oh man, eight bucks
to get across town. That's amazing, right? Not the
case in Manhattan and Brooklyn.

Speaker 1 (28:06):
No, certainly not.

Speaker 3 (28:14):
And we'll be right back after this break.

Speaker 1 (28:26):
Even simple algorithms like you know, what's the best route
to take. I've been completely screwed over in recent months
by Google, which will default, instead of the fastest, it
will go to the eco-friendly way to drive. And
I'm like, how are you even defining eco-friendly? Like

(28:47):
are you is it because I'm saving gas? Because I'm
not braking as much? Am I saving the environment? What
exactly am I doing? And first of all, I didn't
say I wanted to do this. Secondly, I don't know
what you mean by eco friendly, Like third of all,
just tell me that I'm not going the fastest way
instead of defaulting to it. So I think that in
that sense, we want to be explicit in the algorithms.

(29:07):
And I think, you know, I do
see a future where, like, there's human input into
it, where you're like, I want to take the Waymo,
and, like, this is my priority, right? Like,
get me here fastest, or, I don't care, I want
the scenic drive, like, whatever it is. Like, that
would be, that would be an interesting way of doing it,

(29:27):
as long as you know the model doesn't lie to
you and it's not like, oh, this is the fastest way,
right because I'm programmed to take these other things into consideration,
but I don't want to piss you off.

Speaker 2 (29:40):
Yeah, look, I mean part of the question is like
what is the algorithm solving for? Right, So, like I
now live in the East Village, which is very dependent
on a couple of crosstown subway lines, like the L,
and so it's actually pretty good overall for subway service,
but you're almost always making transfers to get most places,

(30:04):
and like you know, there is a lot of redundancy
typically in which transfer you make.

Speaker 3 (30:10):
And like I don't think.

Speaker 2 (30:11):
The Google subway directions are often very good in terms
of matching my intuitions about like where's the right place
to make it? Yeah, a transfer, right, Like you know,
just the kind of combination of the fact that, like
some lines are more reliable than others, some some actual transitions, right,
Like, you know, Union Square, if you're ever in

(30:33):
Union Square in Manhattan, all those, you know, it's very
crowded on those platforms, right. And like there are some
other stops where just like you know, I don't want
to have to change there if there's another alternative, and
like I don't think it quite understands that adequately, And
like the fuzziness around, like I don't actually have to
get to this exact location, just somewhere kind of like near there, right,

(30:54):
I don't think it's fantastic. But like, what are
you trying to solve for?

Speaker 3 (30:56):
Like time or like ease of use?

Speaker 2 (30:59):
You know, in theory, it's like, Okay, you can get
there two minutes faster if you transfer twice instead of once. Okay,
Well a I don't really buy I think there's too
much risk in each transfer, right, that something's wrong, right,
take a wrong turn.

Speaker 1 (31:11):
Right.

Speaker 3 (31:12):
Depends on a lot of things.

Speaker 2 (31:14):
But yeah, if you don't know what the algorithm is
solving for exactly, then it's not garbage in garbage out,
but it's inherently inadequate to the problem unless you're kind
of like and probably implicitly it's like some notion of
like utility or satisfaction or whatever else.

Speaker 1 (31:30):
Yeah, No, I mean I think that and a subway
problem is much simpler than all the different things that
a self driving car is going to need to solve
for on any given basis and at any given point
in time. So I think that these are just illustrations
of a lot of the things, a lot of the
challenges that remain a lot of the value judgments that
have to be made as this technology gets off the ground. Now,

(31:54):
I'm also worried, Nate, Let's just imagine, for the sake
of argument, that everything, like, all these algorithms have evolved perfectly,
right? And like I said, for the sake of argument,
let's just say that like they've solved the trolley problem
and they are optimized, like they're doing everything that we

(32:16):
think they should be doing, and like the technology is
now like beautiful and the cars can be driving beautifully.
There's one other element that I worry about, which is
security of the actual systems, in terms of hacking and
in terms of bad actors being able to get into
these autonomous vehicles, take them over, sabotage them, whatever it is.

(32:39):
This is not me being conspiracy theory minded. Basically every
single smart car system has been hacked into at some point,
including things that seem ridiculous, and not just smart cars,
smart homes. Right, Like, there are people who like hack
into homes and like turn your heat to one hundred
or make it freezing, and do things that seem like

(32:59):
silly pranks, but they're not right, Like they're doing this
with a very malicious intent, Like people's toasters, Like anything
that's smart that's connected to the grid, like has been
actually taken over at some point, whether to prove a
point or sometimes for a malicious purpose. And cars, like
if you think about an autonomous system that can be

(33:20):
taken over and whether it's cars or something else, like
this isn't the I mean, it could be the plot
of a movie, but like this is something that's actually
not It's quite reasonable to assume that there are bad
actors who will try to break into any system, just
like energy systems, power grids, et cetera. Like that's, that's something

(33:41):
that has happened. It will continue to happen, and people,
you know, in their quest to kind of evolve and
move forward, safety in that sense is not often the
number one concern, and so there end up being, you know,
there end up being vulnerabilities that are only discovered after
the fact. And if you think about what might happen,

(34:03):
like it's a constant arms race, you know, between bad
actor hackers and people trying to, you know, the white-hat
hackers, like, good actors who are trying to prevent vulnerabilities,
prevent security technology breaches, and it's this constant war and
if you think about now a future where you know,
we have lots of fleets of autonomous vehicles, I think

(34:24):
that's something that we do need to think about and
worry about as well. And like I said, like I
don't think that this is being conspiracy theorist or just
are very alarmist. I actually think that this is quite real.
It happens all the time, you know, people things are
hacked all the time, and as you know, as we
are more and more reliant on it, you realize just

(34:46):
how big of an impact hacks can have. If
you remember, like, six months ago, Las Vegas, kind of,
the Strip, came to a standstill when someone hacked
into all the MGM systems, right? No one could check
in to their rooms. People couldn't even get into their
rooms. It was just absolutely a shit
show. The same thing happened to the Caesars properties. Turns out, by

(35:07):
the way, it was a teenager, but like it was
just an absolute nightmare. And that's just like that's a
tiny thing.

Speaker 2 (35:16):
Yeah, not fully automating things with AI, because when AI
systems fail, they don't fail particularly robustly necessarily.

Speaker 3 (35:27):
I mean, there's also just some shit like.

Speaker 2 (35:30):
You know, probably if humans are picking out driving routes
like from JFK back to Brooklyn or Manhattan, right, probably
you want some degree of randomization. You know, you probably
have a mixed strategy, in game theory terms, where, like,
if everybody takes the most optimal route, then it's no
longer the most optimal route because it gets more crowded.
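[Editor's note: a small sketch of the mixed-strategy point: sampling among routes in proportion to how good they look instead of always sending everyone down the single "best" one. The routes, expected times, and softmax-style weighting are assumptions for illustration.]

```python
# Mixed-strategy routing: sample a route with probability weighted toward
# faster options instead of always taking the argmax, so traffic spreads out.
# Routes, expected times, and the temperature are invented for illustration.

import math
import random

def pick_route(expected_minutes: dict[str, float], temperature: float = 5.0) -> str:
    """Sample a route; shorter expected time means a higher weight."""
    routes = list(expected_minutes)
    weights = [math.exp(-expected_minutes[r] / temperature) for r in routes]
    return random.choices(routes, weights=weights, k=1)[0]

options = {"Belt Parkway": 45.0, "Atlantic Ave": 40.0, "BQE": 38.0}
print(pick_route(options))  # most often the BQE, but not always
```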

Speaker 3 (35:48):
Right, you know, I don't know if there's some of that.
I was just in.

Speaker 2 (35:51):
Miami, a state that can have horrible traffic problems, though they're
also very, like, patchy. I think it's more an infrastructure
matter than, like, a matter of AI decision-making per se, right.
But like, but yeah, no, look, anything where there's not,
like, a kill switch, I get, I get nervous about, yeah,
or a redundant backup.

Speaker 1 (36:12):
And oh, by the way, it doesn't it doesn't even
have to be hacking. By the way, I was just
thinking about the AWS outage that recently happened, where people's
smart beds got affected and like people got woken up
in the middle of the night because the beds started
doing something really weird and then got caught in these
ridiculous positions and there was nothing they could do because

(36:33):
there was no redundancy, and like the entire system had
gone off.

Speaker 2 (36:36):
I don't have a smart mattress or smart toothbrush or smart clippers.

Speaker 1 (36:40):
But exactly no, neither do I. But now imagine if
you have to be in a smart car, right, because
autonomous vehicles are the thing. That's the thing that I
actually like that. I think that we don't worry enough about, right,
Like what happens if there are these outages?

Speaker 3 (36:53):
Well, and we have these coercive like if you go
I've been in where was I in? Norway? Right?

Speaker 2 (36:59):
Whenever the car goes over the speed limit, you
get like a little, same thing in Korea, right, so,
like, so that can be a.

Speaker 3 (37:13):
Little bit coercive.

Speaker 2 (37:15):
I don't know, I don't know how I feel about that, right? Yeah,
but you're certainly giving like the government a lot of control.

Speaker 1 (37:20):
Over, Yeah, you are, and you're giving up and you're
also giving up a ton of privacy because you know,
you're presumably you are associated with you know, all the
routes you take, everything you do, all your habits. So
I think there are these downstream concerns that we haven't
really reached yet because we're still figuring out by we,

(37:40):
I mean the companies who are designing autonomous cars.

Speaker 3 (37:43):
I don't mean me.

Speaker 1 (37:45):
Personally, but I think we're still figuring out like the
algorithms and all this stuff. But like, even downstream, there
are lots of things to be thinking about, safety, security,
redundancy of you know, all of this technology, privacy concerns,
control like when when people can actually like override, who
can override your car? Who can make those types of decisions.

(38:07):
I think that these are really important questions. By the way,
there was a huge you know, there have been huge
outcries in the last few years where all of a sudden,
features of cars that you paid for stopped working because
the company decided they wanted you to pay for them,
like heated seats. Right. Like now, imagine you buy a
car and you don't have a choice anymore to buy

(38:29):
something where it's not smart and you can control everything,
and you're just giving all of that up, and all
of a sudden, they're like, oh, it's now a subscription model. Sorry, Nate,
you can't go to where you're going today because you
haven't paid for the upgrades to this feature. I am
going to lock your car down. So there are all
of these different considerations that I don't think you know,
we've only touched the tip of the iceberg when it

(38:51):
comes to this. Let us know what you think of
the show. Reach out to us at Risky Business at
pushkin dot fm. Risky Business is hosted by me, Maria Konnikova.

Speaker 3 (39:12):
And by me Nate Silver.

Speaker 2 (39:14):
The show is a co-production of Pushkin Industries and iHeartMedia.
This episode was produced by Isaac Carter. Our associate producer
is Sonya Gerwit. Lydia Jean Kott and Daphne Chen are
our editors, and our executive producer is Jacob Goldstein. Mixing
by Sarah Bruger.

Speaker 1 (39:30):
If you like the show, please rate and review us
so other people can find us too, But once again,
only if you like us. We don't want those bad
reviews out there. Thanks for tuning in.