Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production of iHeartRadio's
How Stuff Works. Hey there, and welcome to TechStuff.
I'm your host, Jonathan Strickland. I'm an executive producer with
How Stuff Works and iHeartRadio and I love
all things tech. I always have to qualify that after
I say it, because then I end up covering topics
(00:25):
like this one today where I don't love all things tech.
I guess it's being a little disingenuous
to make that claim. But recently, critical thinking has really
been on my mind a lot. I always want to
be a critical thinker, though like most humans, I do
have lapses. Sometimes I encounter a message that is so
(00:49):
appealing that my desire for it to be true can
override my skepticism, and I'll fail to ask myself or
anyone else for that matter, important questions to make sure
that what is being promised is in fact realistic and achievable.
My goal is to minimize the number of times I
go along with a pitch simply because it was a
(01:09):
really good pitch. But technology sometimes makes that really hard.
And I want to talk about this challenge because it's
one that we all encounter. Technology is undeniably amazing. Just
think about how much humans have achieved in a very
short amount of time. For most of human history, our
technological advancement was really slow. We completed some monumental achievements
(01:33):
in art and architecture, and sadly, in finding new ways
to kill each other, but apart from some early experiments
in steam power, and a few interesting ideas from geniuses
like Leonardo da Vinci, we really didn't see incredibly rapid advances.
Then we get to the nineteenth century, when a combination
of factors led to the Industrial Revolution. That revolution increased
(01:56):
productivity and led to conditions that made it possible for
more innovators to experiment and expand our knowledge, understanding, and
ability to exploit the world around us. Then we get
to the twentieth century and the development of computers and
the transistor, miniaturization, mass produced plastic, and some other important
innovations that would allow for truly rapid technological evolution. Consider this,
(02:22):
When I was a kid, there was no public internet,
cell phones were pretty much restricted to R and D labs.
Personal computers had just entered the market. Many of today's
fastest growing companies couldn't have existed because the businesses they
are based on didn't have a platform yet. For
a while, it remained possible for the average person to
(02:43):
know enough about the technology they encountered to deal with
it when things went wrong, at least for most of
the technology. Some, like television sets and stuff, were already
well beyond the understanding of the average person. But this
actually got harder to do as technology advanced, and we've
seen it manifest in many ways. A good example of
this is in the automotive industry. Classic cars can be complicated,
(03:08):
but with some training and practice an owner can learn
how to do maintenance and repairs on a lot of
different parts of the car by themselves. There's a learning
curve there, but it's totally possible, and there are a
lot of people who love to take old cars and
restore them as sort of a passion project. Today cars
tend to have components in them that are high tech
(03:30):
and sealed away in such a way as to make
it difficult or impossible to access without proprietary tools and
a deep understanding of how they work. You might not
be able to tell what's wrong with a car without
a special diagnostic scanning tool. And after learning what's wrong,
you might not be able to do anything about it yourself.
As cars get more advanced with features like various sensors
(03:53):
and systems for stuff like lane assist, adaptive cruise control,
parking assist, and more, they become harder for the owner
to maintain. They're turning into what folks call a black box,
in which you have a type of technology where the
inner workings are hidden away from the user. It doesn't
literally have to be hidden from sight, it can just
(04:15):
be so complicated that the average person finds it inaccessible.
And this leads us to a real challenge. As we
have learned more about the universe, we've specialized our
focus. We had to. We quickly reached a point in
which it's pretty much impossible to have a deep understanding
of all subjects. As our understanding has grown, we've
(04:36):
pushed back the boundaries of ignorance. But now there's just
too much to know for any one person to know
it all. And the stuff we've built has capitalized on
this focused understanding, but it also means that it's created
a barrier for us. We might know that something works,
but we don't necessarily understand how it works because it's
(04:58):
based on a principle that's alien to us, and this
was bound to happen, but it creates a dangerous situation.
It's dangerous for a few different reasons. First, we as
consumers can grow complacent. We expect stuff to work, and
when that stuff doesn't work, we're frustrated. Worse, because we
probably don't understand how that stuff works, we don't really
(05:22):
know how to go about fixing it. On a positive note,
that usually means there's opportunity for people with expertise to
make a living as a troubleshooter or repair professional, but
it means that as consumers we have less ability to
work with the stuff we actually consume. Second, our technology
is approaching the point where it could be really dangerous
(05:42):
if we don't understand exactly how it's working. As machine
learning models and artificial intelligence become more sophisticated, it becomes
more important for us to understand how these systems are
coming to conclusions. It's pretty cool to say, hey, I
built this machine learning model and I trained an AI on
how to recognize a person's face. Maybe you built the
(06:03):
model to work with a camera manufacturer so that the
cameras they make automatically detect a face and then focus on it.
But that same technology could potentially be used in tracking
and identification systems, and if that system was being used
by say, law enforcement, you want to understand exactly how
the system is identifying people so that you can audit
(06:24):
it and make sure that it's accurate and not
producing a lot of false positives. Otherwise you run the
risk of having the machine mistakenly identify people during an investigation,
which at the very least could be disruptive. Or to
go back to cars for a second, consider driverless cars. Now.
I am still optimistic about driverless cars, but I've tempered
(06:47):
my expectations on when we might see them. In my mind,
I was thinking, well, a car decked out with sensors
and a really fast computer system would be able to
detect potential problems and react to them far faster and
more logically than a human. I even thought a computer
system could monitor the area completely around a vehicle,
(07:08):
whereas a human is typically focused on whatever is directly
in front of him or her, or perhaps in a mirror,
but can't pay attention to all directions at the same time.
And sure machines can react in a fraction of the
time it takes humans to do it. But machines are
really good at handling routine situations and then responding appropriately.
(07:28):
The more unusual an event, the harder it is for
the machine to cope with it. Machines typically aren't terribly adaptive,
and so with many millions of cars on the road,
plus bicyclists, pedestrians, animals, debris, weather events, and other factors,
it's pretty rare for any drive of a significant length
(07:50):
to be completely quote unquote normal. So we need to
design autonomous cars that can adapt to situations. But that
also means we need to understand what decisions a car
will make, or at the very least, determine why a car
behaved in one way versus an alternative. And so there's
a move in AI and artificial neural network circles to
(08:13):
make these processes as transparent as possible so that we're
not caught off guard when a machine takes a particular action.
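To make that push for transparency concrete, here's a minimal sketch of one small slice of such an audit: tallying how often a hypothetical identification system flags the wrong person (a false positive) versus misses a real match (a false negative). The data and names are invented for illustration; this isn't any vendor's actual system or API.

```python
# Minimal audit sketch with invented data: each record pairs the system's
# verdict with the ground truth for one identification attempt.
audit_log = [
    # (system_said_match, actually_was_match)
    (True, True),
    (True, False),   # a false positive: the wrong person gets flagged
    (False, False),
    (False, True),   # a miss: a real match goes undetected
    (True, True),
    (False, False),
]

false_positives = sum(1 for said, truth in audit_log if said and not truth)
false_negatives = sum(1 for said, truth in audit_log if not said and truth)
actual_negatives = sum(1 for _, truth in audit_log if not truth)
actual_positives = sum(1 for _, truth in audit_log if truth)

print(f"False positive rate: {false_positives / actual_negatives:.0%}")
print(f"False negative rate: {false_negatives / actual_positives:.0%}")
```

A real audit is far more involved, but even this toy version shows why you need ground truth labels before you can say a system is accurate.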
And third, advanced technology has given us unrealistic expectations of
exactly what is possible. After all, two hundred years ago,
we wouldn't really dream of going up into space. A
(08:35):
century ago, we might dream of it, but we still
had no real understanding of how we would accomplish it.
Then within another five decades we were sending people up
to space, then to the moon, and now we have
private companies designing launch vehicles that can return to Earth
to be refurbished and used again in future launches. That's
(08:56):
pretty incredible. We also have seen technology go from enormous
to the very tiny. In the nineteen forties, a computer
took up a lot of space, maybe the entire floor
of a building, and its processing power would be a
fraction of what you'd find in the average smartphone. These days,
miniaturization and Moore's Law have conditioned us to think that
(09:17):
technology is capable of pretty much anything. I mean, it
has to be. Not long ago, we would never have assumed that it would
be easy to get your hands on a portable computer,
capable of acting like a camera, a communications device, and
a direct link to the world's largest repository of human knowledge,
even if most of that knowledge seems to be centered
on cats. But that means when someone comes forward with
(09:40):
extraordinary claims, it's easier for us to take them at
face value. Technology has created an environment in which what
was impossible yesterday becomes a mundane everyday task tomorrow, and
this means that people can leverage that to our disadvantage.
In some cases, you might be dealing with an outright snake
(10:01):
oil salesman type, someone who knows very well that the
dream they're peddling isn't based in reality. But in other
cases you might have sincere people who truly believe they've
either cracked the code on something that was previously thought
impossible or they feel they're right on the cusp and
if they can just get enough funding to cover costs,
(10:23):
they'll get the rest of the way there. Now. In
a way, this could be a good thing, as it
means that innovators have more access to resources than ever before,
and it could lead to great discoveries. But in other
cases it can lead to frustration, financial hardship, and worse.
When we come back after this quick break, I'll give
an example that's in everyone's minds right now. There's probably
(10:52):
no better company to point to when you're talking about
the dangers of wishful thinking than Theranos. And I know
it's been in the news a lot. You've probably heard
tons about it, maybe you've seen the documentary about it,
maybe you've seen the various articles or listened to
the podcasts about it. But we're going to talk about
(11:13):
it a little bit more in this context, and just to
give you an overview in case you haven't encountered this.
A woman named Elizabeth Holmes founded the company after she
dropped out of Stanford. I do not know her. I
do not know whether or not she sincerely believed or
believes that the tech she was seeking to invent could
(11:33):
really work. But I do know that as of right now,
it isn't working, and that's a big problem. For those
of you who are unfamiliar with the story, let's give
a quick summary. Elizabeth Holmes wanted to create tech that
could disrupt the health care industry. It would, in theory,
give more control and agency to consumers who could learn
(11:55):
much more about their health on their own without the
need to make an appointment with a doctor and undergo
numerous blood draws to have various blood tests performed. The
basic idea was that Theranos would develop a
device capable of taking a very small sample of blood,
small enough to be drawn from the tip of a finger.
(12:16):
They would then run a battery of tests to look
for indicators of different conditions and diseases within a relatively
short time. It would produce the results, giving the user
more information about their health, which in theory, would help
that person have meaningful conversations with a physician if there
were any markers that raised concern. And it's a very powerful,
(12:37):
very compelling idea. There are technologies like labs on a
chip and microprocessors designed to detect the presence of certain
markers that indicate the presence of illness, but this would
wrap all of that up into a single package. In fact,
Holmes worked on a project related to this after her
first year at Stanford. She joined the Genome Institute of
(12:59):
Singapore and was overseeing blood tests. She first envisioned
a sort of arm band that would use micro needles,
and those micro needles would both draw small blood samples
and would also administer medication on an as needed basis.
She later led a team to work on a machine
that would accept a small capsule containing a blood sample
(13:22):
and attempt to run multiple tests on that sample. This
was in stark contrast with the normal medical procedure in
which a doctor or nurse would draw numerous vials of
blood for testing and send those samples off to one of
two major lab testing companies in the United States. If
Theranos's technology worked, the company could totally upend that
(13:45):
system in the US. Patients could go to a clinic
to have a test run and get the results back
in hours, rather than having to take a trip to
the doctor's office, sit for a blood draw, and wait
several days. It could end up being cheaper than the
old approach. Theranos executives like Holmes stressed that this would
put patients in control of their own health information and
(14:07):
could provide many benefits, such as a heads up for
possible problems in the future or catching something early enough
to treat it before it became too severe. But the
problem is that this all depended upon that "if Theranos's
technology worked" part. As it turned out, the tech wasn't working.
At least, it wasn't working at the level the
(14:28):
company was striving for. The engineers at Theranos were trying
really hard to create a diagnostic device that could take
that small blood sample and run it through a lot
of tests. But as it turns out, that's actually incredibly complicated.
Using such a small sample was already a huge challenge.
On top of that, you have issues you have to
(14:50):
worry about, like contamination. A contaminated sample could give off
false positives, creating a situation in which a patient believes
they might have a particular disease or a condition when
that isn't really the case, or it could mask something
that the patient would need to know about, but because
the results would be inconclusive, they wouldn't know about it.
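As a rough illustration of why running a big battery of tests on one tiny sample is statistically tricky, consider the multiple comparisons problem. Assuming, purely for illustration, that each test is independent and has a ninety nine percent chance of correctly returning a negative result, the odds of at least one false positive climb quickly as the panel grows:

```python
# Illustrative numbers only, not Theranos's: assume independent tests,
# each with 99% specificity (probability of correctly reporting a negative).
specificity = 0.99

for n_tests in (1, 10, 50, 100):
    p_any_false_positive = 1 - specificity ** n_tests
    print(f"{n_tests:>3} tests -> {p_any_false_positive:5.1%} "
          "chance of at least one false positive")
```

And that's before contamination or small-sample variability even enters the picture.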
(15:12):
Perhaps the hope was that the company would be able
to develop the technology rapidly with the help of large
investments in the company. And sure enough, there were a
lot of folks with deep pockets who poured money into Theranos,
and you can sort of understand why. If it all worked,
it would be a revolution in medicine. The company would
stand to gain billions of dollars. The cause appeared to
(15:36):
be noble, and the outcome looked like it would be
incredibly profitable. It was a very tempting opportunity, and many
didn't resist that temptation. On top of all that, it
was dependent upon technology, and as I mentioned, we've come
to a point where we believe technology can do just
about anything. So it didn't seem outside the realm of
possibility that a microchip inside a sufficiently sophisticated machine would
(16:01):
be able to run a series of tests on the
small blood sample and come up with meaningful results. But
flash forward a few years, and a bombshell of an
article revealed that there was really a shell game going
on at Theranos. Now we know the
technology was a failure, that Theranos was depending upon the
same sort of machines that the company was purporting to
(16:24):
replace with its innovative approach, that hundreds or thousands of
patients in trial locations were potentially at risk due to
unreliable results, and that the company used some pretty draconian
tactics to keep employees in line and prevent them from speaking
out about what was going on. It's about as bad
an outcome as you could imagine. What's more, there are
(16:46):
those who say that even if everything had worked, the
whole enterprise was misguided in the first place. A piece
in Wired by Noam Cohen cites a couple of those people.
The piece has the title "The Other Big Lesson We
Should Learn From Theranos." Cohen mentions Faye Flam, who wrote
a piece in Bloomberg that argued Theranos was tapping into
(17:08):
another deep human desire, the illusion of controlling our own
destiny. Through Theranos, we could end up getting our own
test results and then apply our own interpretations to them.
Perhaps we would interpret them in a way that is
most comforting to us, or one that seems to align
with our preconceived ideas about our health. This isn't exactly
(17:29):
the best way to handle a medical issue. Surely it
makes more sense to have a trained medical professional provide
an unbiased, objective interpretation of test results that gives you
the best chance to take appropriate actions that will help
you lead a better, healthier life. So yeah, it was
a really compelling sales pitch, no wonder so many people
(17:52):
were on board. If it worked, it would be cheaper,
less painful, more convenient, and at least on the surface,
more empowering than the established method. It was the sort
of thing we'd want to believe in, so people did.
I think autonomous cars are following a similar trajectory. Now,
to be clear, I feel that a lot of great
(18:12):
work has been done in autonomous cars. They are much
further along than a mythical blood testing device that only
ever got approval for performing one type of blood test
when it was supposed to be able to run more
than one hundred of them, but we still have a
long way to go. Unfortunately, because of our experiences dealing
with truly amazing tech and the expectation that, of course,
(18:36):
technology can take care of the problem, we've had some
high profile accidents that prove this isn't the most reliable philosophy. Again,
it would be understandable to put a lot of faith
in the tech. Google's self driving cars, which have been
pioneers in the field, famously operated at first in secret
(18:56):
and then openly for hundreds of thousands of driving miles without a
single accident, or at least that was the official story.
There have been a few accidents, most of which were
likely the fault of a human driver, either the safety
operator in the autonomous vehicle or the driver of another car,
but later reports suggested that there were some serious accidents,
(19:18):
at least a few of which were caused by the
autonomous driving system behaving in an unexpected way. The company
kept these accidents quiet, and so there was an unearned
expectation of safety with this tech. Then enter Tesla and
the autopilot feature in the company's electric vehicles. While the
company issued a statement that made it clear that this
(19:40):
feature wasn't supposed to replace a human driver, that didn't
stop people from trying it out that way. Most of
those people didn't have any problems, but in at least
a couple of cases, drivers using autopilot ended up in
tragic situations. One of those was the case of Joshua
Brown. In two thousand sixteen, his Tesla Model S crashed into
(20:01):
a semi truck. Brown had been using the autopilot feature,
according to the vehicle's data logs. Out of the thirty seven
minutes Brown had the autopilot feature turned on, his hands
were on the wheel for just twenty five seconds total,
in direct violation of the policy that Tesla had set.
The company stressed the autopilot isn't meant to replace human
(20:24):
drivers and that the car's driver should have had their
hands on the wheel at all times. The second fatal
incident happened in March two thousand eighteen, when Walter
Huang's Model X veered into a highway safety barrier. Recently,
his family sued Tesla, alleging the company was aware of
the dangers of the feature, that Huang had been operating
(20:45):
the vehicle within the parameters of autopilot, and that Tesla
had been using drivers like Huang to beta test changes
to the feature in the wild. That suit is just
getting started as I record this episode, and I don't
mean to pick on Tesla. After all, I started this
by talking about how Google kept several accidents on the
QT. One of Uber's self driving cars in its
(21:07):
beta test program in Arizona, struck and killed a pedestrian
in March two thousand eighteen. State prosecutors decided that Uber
isn't liable for the accident, but that the safety operator
who was in the car might bear some responsibility for
failing to act before the accident. According to VentureBeat,
the operator was streaming an episode of The Voice and
watching that rather than the road. It's quite possible that
(21:31):
companies pushing autonomous car technology are doing their best to
keep incidents quiet in an effort to avoid government regulations
and interference which could threaten the profitability of such a pursuit.
But at the same time, these high profile incidents have
dealt a blow to consumer confidence about the technology in general,
and they really reinforce that driving is more complicated than
(21:52):
just staying within your lane and braking if something is
in the way. Before I wrap up this section, I
do want to also mention that, at least according to Tesla,
the autopilot feature has proven to be safer than human
drivers operating vehicles unassisted. According to Tesla's report, there was
one accident per two point eight seven million miles driven
(22:16):
where autopilot was engaged and one accident per one point
seven six million miles driven when it wasn't, and that
according to government statistics, the average is an automobile crash
every four hundred thirty six thousand miles. However, skeptical researchers
have found that Tesla hasn't always been honest or at
(22:36):
the very least correct about safety reports. A firm called
Quality Control Systems sued the United States National Highway Traffic
Safety Administration, or NHTSA, in order to
get the data the agency said had proved that Tesla's
autopilot had cut back on crashes by forty percent. In fact,
(22:58):
QCS found that in cases in which all the data
was available, which was just a fraction of the cases
the NHTSA had used to come up
with that mark, the Autosteer feature on autopilot actually
increased crash rates.
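To line those figures up side by side, here's a quick back-of-the-envelope comparison using only the numbers quoted above. The arithmetic is ours, and note that a raw comparison like this ignores differences in road mix and fleet, which is one reason researchers push back on it.

```python
# The miles-per-crash figures quoted in this episode, normalized
# against the cited government average for comparison.
miles_per_crash = {
    "Autopilot engaged":  2_870_000,
    "Autopilot off":      1_760_000,
    "US average (cited)":   436_000,
}

baseline = miles_per_crash["US average (cited)"]
for label, miles in miles_per_crash.items():
    print(f"{label:<20} one crash per {miles:,} miles "
          f"({miles / baseline:.1f}x the cited average)")
```

So what does all this mean?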
I'll get back to that in a second, but first
(23:20):
let's take a quick break. So am I saying you
shouldn't believe anyone, Well, that's not quite it. I think
a better thing to say is don't accept anything at
face value. Sometimes people can just be wrong about stuff.
(23:43):
It happens to me more often than I care to admit.
But the responsible thing to do in those cases is
to own up to the mistake and correct it where
you can. I am certain that at least some people
who thought they were onto some sort of free energy device,
for example, really did believe they were onto something. Others might
have suspected that what they were pursuing was impossible, but
(24:05):
they had already invested too much to back out at
any rate. Whether it's a perpetual motion machine or an
over unity engine, the simple fact is that these devices
have never been proven to actually work as described. Supporters say
this is because there are powerful entities like petroleum companies
that will use every means to keep such devices from
(24:28):
being deployed. But at the level of classical physics, such
a device would have to defy laws of physics that
have stood the test of time. Now does that mean
that such a device is impossible. No, it's not impossible,
but it does mean you need truly extraordinary, irrefutable proof
that it works.
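For reference, the core constraint can be stated in one line at the level of classical physics: over a complete cycle, a machine's internal energy returns to where it started, so the work you get out can't exceed the energy you put in.

```latex
% First law of thermodynamics applied over one full cycle:
\Delta U_{\text{cycle}} = 0
\;\Rightarrow\;
W_{\text{out}} = E_{\text{in}} - E_{\text{lost}} \le E_{\text{in}}
\;\Rightarrow\;
\eta = \frac{W_{\text{out}}}{E_{\text{in}}} \le 1
% an "over unity" claim asserts \eta > 1, which violates this bound
```

In other cases, people are being outright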
(24:52):
dishonest in an effort to advance their own agendas. They
might take efforts to hide any deficiencies in the technology,
or to overly elevate stuff that's working to make it
seem more important than it is. They might just be
stalling for time in the hopes that a breakthrough is
right around the corner and they can reap the benefits
once it all pans out. Now, we're going to see
(25:14):
technology continue to advance and evolve. In most cases, we'll
see it do so gradually, perhaps so gradually that we
don't really appreciate how incredible that technology can be. I
have owned smartphones for about ten years or so, and
now I take it for granted that I have access
to them. But as a kid, it would have totally
(25:35):
floored me to know that such a thing would be
possible in my lifetime, let alone that I would actually
own one. When confronted with claims about technology, it's good
to ask questions, questions like: How is this possible? How
does it work? What is it doing differently from earlier
versions of this tech. If it's a technology that relates
(25:59):
to a specialized field, it might be necessary to consult
experts in that field to get good answers. There's no
shame in that. Heck, if someone presented a technology to
me with claims that the whole thing worked on quantum principles,
I need to consult with an expert. I have the
most basic understanding of high level quantum physics, and once
(26:21):
you get past that, it's all beyond me. I might
suspect something fishy, but I'd have no way of knowing
on my own whether my suspicions were warranted. I
would need to consult with someone far more educated and
experienced than I am in the world of quantum physics to
get a better handle on it. Now, the more vague
the claims, the more skepticism you should apply. If the
(26:44):
claim includes disconnected scientific terminology, particularly if it is getting
into fields like quantum physics, that's a red flag. You
need to pay closer attention to those claims, or might
even include non scientific or meaningless language, which is another
big warning sign. Maybe you'll see a device that claims
that if you wear it, the device will boost your
(27:05):
quote unquote energy in some way, but what does that
actually mean? Terms must first be defined, and then you
can move on to the next question, which is, well,
how the heck does it do this thing you claim
it's doing. For gadgets or technologies that cite experts, it's
good to find out who those experts are. If there's
language like studies show that blah blah blah blah blah,
(27:27):
it's good to find out who did the actual study.
Was it a reputable third party that could provide an objective,
unbiased point of view, or was it an in house
team or a biased party? That's lending credence to claims without
actually finding out if those claims have merit. Moreover, we
have to remember that tech isn't magic, though science fiction
(27:50):
author Arthur C. Clarke did once observe that any sufficiently
advanced technology would seem to be magic to us. Technology
has limits. There are fundamental physical limits that tech can't break through.
And just because we see tech doing some stuff really
well doesn't mean it can do everything equally well. I
(28:12):
know I go on about critical thinking a lot in
this show, but the reason I do that is I
want people to apply that skill set in their lives
to make better informed decisions. I want you guys to
avoid pitfalls, whether they are purposefully placed in your path
or not. I want you to be able to spot
a mistake or a scam. I want you to follow
(28:34):
your suspicions when you feel something isn't on the up
and up. And along with that, I do urge the
use of compassion. Please keep in mind that not everyone
hawking tech that promises too much is doing so out
of malice or greed. Some could be genuinely misled by
what the tech can do, and so it's a good
(28:55):
idea to have critical thinking and compassion go hand in hand.
Try to understand not just how realistic the claim is,
but the person making the claim. If they're intentionally trying
to mislead people and take advantage of them, well they're
kind of scummy and I feel they should be called
out on that behavior. But maybe they're just believing in
(29:18):
something they want to believe in because of the promise
it makes. That doesn't necessarily make them bad. It might
mean they are gullible, or that they are in a
situation that they desperately want out of, and the promise
seems to suggest an escape route. So long story short,
don't believe all the hype. Ask questions, ask for clarification
(29:41):
when you get answers, to make sure that those answers
are actually substantive and they mean something. Be prepared to
dismiss a claim if the support for that claim is lacking.
Also be prepared to accept a claim if the support
merits it. One of the biggest complaints about skeptics is
that they are seen as people who dismiss claims out
(30:02):
of hand, and for some people that is true, although
we typically call them deniers rather than skeptics. But most
of us try to keep in mind that if extraordinary
proof for a claim exists, we should be willing to
adjust our world view to incorporate this new idea, even
if it previously seemed impossible. The proof just has to
(30:25):
be there, And don't just assume everyone is out to
pull one over on you. Just be aware there are
those people out there too. In short, be good human beings,
and keep in mind, again, that as technology advances, we're
going to keep running into this problem. Because we see
it do amazing things in one arena, we might expect
(30:48):
it can do equally amazing things in another and that's
not always the case. Well, that's it for this soapbox
edition of TechStuff and my regular call for
critical thinking. I think it's particularly important to consider it
now in the wake of things like Theranos and Facebook
(31:10):
and all of its controversies and related technological issues. And
of course you can and should use critical thinking well
outside the world of technology. You should apply it pretty
much everywhere in your life so that you can be
reasonably sure you're getting the real deal and not being misled.
If you guys have suggestions for a future episode of
(31:32):
TechStuff, you can contact me. The email address for
the show is TechStuff at HowStuffWorks dot com,
or you can drop me a line on Facebook or Twitter.
The handle for both of those is TechStuff HSW.
You can head on over to our website, that's tech
stuff podcast dot com, which has an archive of all of
our previous episodes plus links to our background on the show,
(31:55):
as well as to our online store, where every
purchase you make goes to help the show. We greatly
appreciate it, and I will talk to you
again really soon. TechStuff is a production of
iHeartRadio's How Stuff Works. For more podcasts from
iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
(32:17):
or wherever you listen to your favorite shows.