
September 2, 2020 49 mins

In tech, a black box is any technology that hides the processes that take input and generate output. We see what goes in, we see what comes out but what's going on inside? We look at the black box problem in tech.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to Tech Stuff, a production from iHeartRadio.
Hey there, and welcome to Tech Stuff. I'm your host,
Jonathan Strickland. I'm an executive producer with iHeartRadio,
and I love all things tech, and today I thought
it was a good time to take another opportunity to

(00:26):
chat about one of the subjects I really hammer
home in this series, and I don't make any apologies
for this. It is not about convergence, though long time
listeners of Tech Stuff know that for years that was
my favorite trend to cover. It's about critical thinking. So yeah,

(00:46):
this is another critical thinking in technology episode. Now, in
these episodes, I explain how taking time and really thinking
through things is important so that we make the most
informed decisions we can and so that we aren't either
fooling ourselves or you know, allowing someone else to fool
us when it comes to technology. Though, I'll tell you

(01:09):
a secret, using these skills in all parts of your
life is a great idea because it can be really
easy for us to fall into patterns where we let
ourselves believe things just because it's convenient or you know,
it reaffirms our biases or prejudices, that kind of thing.
So if you use critical thinking beyond the realm of technology,

(01:31):
I ain't gonna be mad. Specifically, I wanted to talk
about a general category of issues in tech that some
refer to as the black box problem. Now, this is
not the same thing as the black box that's on
board your typical airplane. In fact, i'll explain what that

(01:53):
is first because it's pretty simple, and then we can
move on. First, the black box inside airplanes is typically orange.
So right off the bat, we have a problem with nomenclature, right?
I mean, you had one job, black box. Actually, that's
that's not true. The black box has a very important
job and it requires a couple of things to work.

(02:15):
But the black box, which is orange, is all about
maintaining a record of an aircraft's activities in a housing
capable of withstanding tremendous punishment. Another name, or a
more appropriate name really for this device, for the black box,
is the flight data recorder. Sensors in various parts of

(02:35):
the plane detect changes and then send data to the
flight data recorder, which you know, records them. If a
pilot makes any adjustments to any controls, whether it's the
flight stick or a knob or a button or a
switch or whatever. The control not only does whatever it
was intended to do, assuming everything's in working order, but

(02:58):
it also sends a signal that is recorded on the
flight recorder. So the job of the flight recorder is
to create as accurate a representation of what went on
with that aircraft as is possible. The Federal Aviation Administration,
or FAA, in the United States has a
long list of parameters that the flight recorder is supposed

(03:19):
to keep track of, more than eighty in fact, and
these include not just the aircraft systems, but what was
going on in the environment. So if a pilot encounters
a problem on a flight, or in a worst case scenario,
in the event of a crash, the flight recorder represents
an opportunity to find out what actually went wrong. You know,

(03:40):
was it a malfunction, was it pilot error, was it weather,
was the crash survivable? Memory units inside the heavy duty
casing of the black box are really meant to act
as a lasting record, So if the recorder is recoverable,
it gives investigators a chance to find out what happened. But,

(04:00):
as I said, that's not the black box I really
wanted to talk about for today's episode, so I'm not
going to go into any more detail about that. Now,
you could argue that the black box I want to
talk about is sort of the opposite of what we
find in airplanes, because in an airplane, the black box
contains a record of everything that has gone on, and

(04:22):
it can help explain why a certain outcome has happened.
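To make that idea of keeping a complete record concrete, here's a minimal sketch, in Python, of what a flight-data-recorder-style logging loop looks like. The parameter names and values are invented stand-ins for illustration, not the FAA's actual list of eighty-plus required parameters.

import time

# A tiny, illustrative flight-data-recorder loop. The parameters and the
# read_sensors() stub are hypothetical; a real recorder captures the full
# FAA-mandated list in crash-hardened memory.

def read_sensors():
    # In a real aircraft these values come from sensors and control-position
    # transducers all over the airframe.
    return {
        "altitude_ft": 31000,
        "airspeed_kt": 450,
        "heading_deg": 270,
        "pitch_deg": 2.5,
        "throttle_pct": 78,
    }

flight_log = []  # stands in for the survivable memory unit

def record_snapshot():
    snapshot = {"timestamp": time.time(), **read_sensors()}
    flight_log.append(snapshot)  # everything is kept; nothing is interpreted
    return snapshot

record_snapshot()
print(flight_log[-1])

The recorder just keeps everything; investigators do the interpreting later.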
In technology in general, we use the term black box
to refer to a system or technology where we know
what goes into it and we can see what comes
out of it, but we have no idea of what
went on in the middle of that process. We don't
have a way to understand the process by which the

(04:45):
device takes input and produces output. Now, in most cases,
we're not talking about an instance where literally nobody understands
what's going on with a device or a system. It's
more like the creators of whatever system we're talking about
have purposefully made it difficult or impossible for the average

(05:07):
person to understand, or in some cases even see, what
a technology is doing. Sometimes it's intentional, sometimes it's not.
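If it helps to picture that definition in code, here's a minimal sketch in Python: a wrapper where an outside observer can see what goes in and what comes out, but the process in the middle is tucked away. The example function is made up purely for illustration.

class BlackBox:
    """Exposes only input -> output; the process in between stays hidden."""

    def __init__(self, process):
        self._process = process  # the part the user never gets to see

    def run(self, user_input):
        # All an outside observer gets: what went in and what came out.
        return self._process(user_input)

# A stand-in for whatever hidden logic a vendor might ship.
def _secret_process(x):
    return (x * 37 + 11) % 100

box = BlackBox(_secret_process)
print(box.run(5))  # we see 96 come out...
print(box.run(6))  # ...and 33, but not why

Of course, in Python you could still poke at the internals if you really wanted to; the point is just the shape of the idea, seeing the input and the output without the process.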
So maybe purposefully is being a little strong there, but
that's often how it unfolds. Let's take a general example
that has created issues with a particular subsection of tech heads,

(05:28):
and those would be the gear heads, you know, the
people who love to work on vehicles like motorcycles and
cars and trucks and stuff. In what you might think
of as the good old days, at least from the
perspective of DIY technology, there's plenty of other
things that were wrong back then, but in those terms,
a car's systems were pretty darn accessible. The motorist would

(05:52):
need to spend time and effort to learn how the
car worked and what each component was meant to do,
but that was actually an achievable goal. So with
a bit of study and some hands on work, you
could suss out how an engine works. You know, how
spark plugs cause a little explosion by igniting a mixture
of fuel and air inside an engine's cylinders, how that

(06:15):
explosion would force out a piston which connects to a
crankshaft, and how that reciprocating motion of the piston
would translate into rotational motion of the crankshaft that
could then be transmitted ultimately to the wheels through a transmission. You
could learn what the carburetor does, how the various fans
and belts work, and what they do, you know, where

(06:35):
the oil pan is and how to change out oil
and all that kind of stuff. What's more, you could
make repairs yourself if you had the tools, the replacement parts,
and the knowledge and the time. You could swap out parts.
You could customize your vehicle. You know, I've known a
lot of people who have taken on cars as projects.
They'll purchase an old junker and then they will lovingly

(06:58):
restore it to its former glory or turn it into
something truly transformational. And all of that is possible because
those old cars had really accessible systems. They were relatively
simple electromechanical systems. Once you understood how they worked,
you could see how they worked or how they were

(07:19):
supposed to work, and you could understand what was going on.
Through that understanding, you could address stuff when things weren't
going well. And that's how cars were for decades. But
that began to change in the late nineteen sixties. But
it really accelerated, uh, no pun intended, in the nineteen seventies.

(07:41):
So what happened? Well, in nineteen sixty eight, leading into
nineteen sixty nine, Volkswagen introduced a new standard feature for
their Type three vehicle, which was sometimes called the Volkswagen
fifteen hundred or sixteen hundred. There are a couple of
names for it. These were family cars, and Volkswagen's
intent was to create a vehicle with a bit more

(08:03):
luggage and passenger space than their type one, which was
also known as the Volkswagen Beetle. The feature for these
cars that I wanted to talk about was an electronic
fuel injection system that was controlled by a computer chip,
and the marketing for this particular feature said that
this quote electronic brain end quote was quote smarter than

(08:28):
a carburetor end quote. Now, the purpose of a carburetor
is to mix fuel with air at a ratio that
is suitable for combustion inside the engine cylinders. But an
engine doesn't need exactly the same ratio of fuel to
air from moment to moment. It actually varies. As a
vehicle runs longer, or it travels faster, or it starts

(08:50):
climbing a steep hill, or you know, lots of stuff,
the ratio changes somewhat, and the carburetor manages this with
a couple of valves, one called the choke, another called
the throttle, among other elements. But it's all mechanical and
while it works, it's not as precise as an electronic
system could be. And that's where the Volkswagen system came in.
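Here's a rough sketch, in Python, of the kind of closed-loop correction a modern electronic fuel injection controller performs. The original Volkswagen system was simpler than this, and the sensor readings, gain, and pulse width here are invented illustration values, not anybody's real engine calibration; the 14.7-to-1 target is the commonly cited stoichiometric air-to-fuel ratio for gasoline.

# Simplified closed-loop fuel trim: nudge the injector pulse width toward
# the target air-fuel ratio based on what an exhaust sensor reports.
# All numbers are illustrative.

TARGET_AFR = 14.7   # stoichiometric air-to-fuel ratio for gasoline
GAIN = 0.02         # how aggressively we correct per update

def adjust_pulse_width(pulse_ms, measured_afr):
    # Mixture too lean (too much air) -> add fuel; too rich -> cut fuel.
    error = measured_afr - TARGET_AFR
    return pulse_ms * (1 + GAIN * error)

pulse = 3.0  # injector open time in milliseconds
for measured in [16.2, 15.4, 14.9, 14.7]:  # readings as the mixture settles
    pulse = adjust_pulse_width(pulse, measured)
    print(f"measured AFR {measured}: new pulse width {pulse:.3f} ms")

A carburetor does a mechanical approximation of this with its choke and throttle; the electronic version can re-run that little calculation many times a second.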

(09:11):
Volkswagen was pushing this as a more efficient and useful
component than a carburetor. It would prevent the engine from
being flooded with fuel and not enough air for combustion.
It would handle the transitions of fuel and air mix
ratios more quickly and precisely. It was the start of
something big, but were it not for some other external factors,

(09:35):
it might not have taken off the way it did,
or at least not as quickly as it did. Those
other factors, as I said, were external, and there were
a pair of doozies. One was a growing concern that
burning fossil fuels was having a negative impact on the environment,
which turned out to be absolutely the case. Cities like

(09:56):
Los Angeles, California, where getting around pretty much requires
having a car, were dealing with some really serious smog problems,
and so organizations like the Environmental Protection Agency in the
United States began to draft requirements to reduce car emissions.
That would mean that automotive companies would have to create

(10:17):
more efficient engine systems. The other major factor that contributed
to this was the oil crisis of the nineteen seventies,
which I talked about not long ago in a different
Tech Stuff podcast. This was a geopolitical problem that threw
much of the world into a scramble to curb fossil
fuel consumption because the supply was limited. The double whammy

(10:41):
of environmental concerns and the oil crisis forced a lot
of car companies to rethink their previous strategy, which was
pretty much more power, make it bigger, go faster, guzzle more gas.
That is kind of how they were thinking
back in the day. Turns out that was an unsustainable option.

(11:02):
And if you look back, especially at American cars during
the fifties and sixties, you see that trend of the
engines getting bigger and more powerful, and that was just
the way things were going until we started to see
these external changes come in, and so more car companies
began to incorporate computer controlled fuel injection systems. But this

(11:27):
move also marked a move away from accessible design, which
made it, you know, harder for the DIY
crowd to work on cars. Working on a damaged carburetor
was one thing. Dealing with a malfunctioning computer chip was another.
It didn't fall into the typical skill set of your
amateur mechanic. And of course we didn't stop at computer

(11:49):
controlled fuel injection systems. Over time, we saw a lot
more automotive systems make the transition to computer control. Today,
your average car has computer systems that control the engine
and the transmission, the doors, entertainment system, the windows, and
these systems are all individual and they have a name.

(12:09):
They're called electronic control units, or ECUs. Collectively,
they form the controller area network, or CAN.
The connections themselves, the physical connections, are called the CAN
bus, and that's really just a way of
saying these are the physical connectors that allow data to

(12:30):
pass from one ECU to another. And there
wasn't really like a central processing unit or anything. There
was no, like, central brain. It was more like ECUs
that depend upon one another would send relevant information
to each other and not to anything else. So you know,

(12:51):
if the door sensor is showing a door is open,
it can send an alert to other systems so that
that information is appropriately dealt with. Now, at the
same time that these individual systems were evolving, so too
we saw the rise of what would become the onboard
diagnostic system, or OBD, and the OBD

(13:14):
keeps an eye on what's going on with the
various systems in the car, and it sends notifications to
the driver via dashboard indicators when something is outside normal
operating parameters. So let's say that this diagnostic computer picks
up that there's something hinky happening with the fuel air
mixture and it activates that pesky check engine light on

(13:37):
the dashboard that gives you next to no useful information.
The problem is that these days it can be challenging
or sometimes impossible to figure out exactly what caused that
check engine light to come on without access to
some special equipment and expertise. The car systems have become
so sophisticated that it could be a challenge to figure

(13:59):
out what exactly has gone awry. Mechanics use devices called
OBD scan tools, and these tools connect to
the computer on board a car, and then the car
provides an error code to the scanner. This, by
the way, took a long time to standardize because you've
got a lot of different car companies out there, and

(14:20):
obviously there was a need to move towards standardization so
that you didn't have to have fifty different scan tools
and fifty different code charts to deal with all the
different car companies. But the code corresponds to the specific
issue the OBD has detected. So not only
do you need a special piece of equipment to diagnose

(14:41):
what has gone wrong with the car, you also need
to know the codes, or else you haven't really learned anything.
If I get, you know, an eight-digit code, and
I don't know what that code refers to, then I'm
not really any better off than just looking at a
check engine light. On top of all of that, even
if you know what is wrong, you might not be

(15:02):
able to easily access the problem or fix it due
to the level of complexity, sophistication, and computerization of vehicles.
Not all cars or motorcycles or whatever are equal. Obviously,
some are a bit easier to work on than others.
Some require a lot of specific care though. For example,
if you're driving a Tesla, chances are the amount of

(15:24):
personal tinkering you're going to do on your car is
going to be fairly limited. Now I'm not saying it's impossible,
just that it's really challenging. So in general, we've seen
cars go from a mechanical system or electromechanical system
that the average person can understand and work on, to
a group of interconnected, specialized computer systems that are increasingly

(15:49):
difficult to access. The cars have become a type of
black box. This can be extra frustrating for gear heads
who actually have an understanding of underlying mechanical issues that
could cause problems. They might even know how to solve
an issue if they can just get to it, but
they are finding themselves with fewer options in order to

(16:12):
address underlying issues. Now, cars are just one example of
technologies that have moved toward a black box like system.
There are lots of others. But apart from making it
harder to tinker with your tech, what's the problem? Well,
when we come back, I'll talk about some of the
pitfalls of turning tech into a black box. But first

(16:34):
let's take a quick break. We're back. So the car
transformed from a purely electromechanical technology to one that
increasingly relies on computer systems. But the computer itself can
also be something of a black box for people. In

(16:58):
the very early days of the personal computer, it was
hobbyists who were ordering kits through the mail and then
building computers at home. Typically, these hobbyists had a working
understanding of how the computer systems operated, you know, the
actual way in which they would accept inputs and process
information and then produce outputs. Before high level programming languages,

(17:22):
programmers also had to kind of think like a computer
in order to program them to carry out functions. As
computer languages became more high level, meaning there was a
layer of abstraction between the programmer and the actual processes
that were going on at the hardware level of the computer,
that connection began to get more tenuous. Now, I'm not

(17:45):
saying that programmers today don't have a real understanding of
how computers work, but rather that this understanding is less
critical because programming languages, computer engines, app developer kits you know,
software developer kits and so on, provide a framework that
reduces the amount of low level work programmers need to

(18:07):
do in order to build stuff as software. For the
average user, you know, someone who isn't learned in the
ways of computer science, computers are pretty much black boxes.
They work until they don't. You push buttons on a keyboard,
or you click on a mouse, or you touch a screen,
and, you know, the computer does the stuff. How it

(18:30):
does stuff, like how it detects a screen touch and
then translates that into a command that is then executed
to produce a specific result, you know, that's not important
to us. We don't care or need to know how
that works in order to enjoy the benefits of it.
So for us, it's just the way things are. You

(18:51):
push that button and this thing happens. It just does.
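If you're curious, Python will let you peek one layer down that stack yourself with its built-in dis module: one friendly high-level line turns out to be a handful of lower-level bytecode instructions, and below those sit the interpreter, the operating system, and the hardware.

import dis

# One line of high-level code...
def add(a, b):
    return a + b

# ...and the stack-machine bytecode the Python interpreter actually runs.
dis.dis(add)

Even that listing is still several layers above what the processor itself executes.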
The black box of the computer system, which can be
a desktop, a laptop, tablet, smartphone, video game console, you
know, whatever. It just takes care of what we need it
to do. That's not to say a computer is an
impenetrable black box. You can learn how they work, and

(19:13):
how programming languages work and so on. Computer science and
programming classes are all built around that. So while a
computer system is effectively a black box to the average user,
it wasn't made that way by design, and it can
be addressed on a case by case basis, depending on
the time, the interest of the individual computer user, and

(19:36):
their dedication to learning. But sometimes people will set out
to make technologies with the intent of them being black
boxes from the get go. These technologies are dependent in
part or in whole on obfuscating how they work,
in other words, on obscuring it. Sometimes that's in an
effort to protect an invention from copycats, the whole idea

(20:00):
being that if you come up with something really clever,
you don't want someone else to come along, lift that
idea and do the same thing you are doing, but
selling it for you know, less money or something. But
other times you might be hiding how something works, specifically
with the intent to deceive. And now it's time to

(20:21):
look much further back than the nineteen sixties. It was
seventeen seventy in Europe. As the story goes, the European
world had seen a great deal of advancement in mechanical
clockwork devices at that point. Clocks themselves, often powered by
winding a spring and keeping time using gears on a

(20:42):
reliable and consistent basis that was much better than earlier methods,
even allowed people the ability to carry a timekeeping
device with them. Phenomenal. Based on similar principles, various tinkerers
had come up with toys and distractions that also ran
on clockwork, like gears and springs. Some of these were

(21:05):
quite elaborate, such as figures that appeared to play musical instruments,
and one of them was particularly impressive. It appeared to
be an automaton that could play expert level chess. The figure,
made out of wood, was dressed in Turkish costume, leading
to it being called the Turk, or sometimes the mechanical Turk.

(21:29):
If you were to sit down to play against the Turk,
you as an opponent, would move a piece and then
you would watch as this mechanical figure would shift and
move a piece of its own in response. And the
Turk was a pretty good chess player. It frequently beat
the opponents it faced. Sometimes it would lose to particularly

(21:52):
strong players, but it held its own pretty darn well.
The man behind this invention was Wolfgang von Kempelen, who
was in the service of Maria Theresa, Empress of the
Holy Roman Empire. He had been invited to view a
magician's performance in the court, so the story goes, and
the Empress had invited him specifically and afterwards asked him

(22:15):
what he thought, and allegedly he boasted he could create
a much more compelling illusion than anything this magician did. Now,
according to the story, the Empress essentially said, oh yeah,
well, prove it, buster, and he was given six months
to do just that. The Turk was what he had
to show for it in six months time, and it

(22:36):
reportedly went over like gangbusters. The wooden Turk stood
behind a cabinet, on top of which was the chessboard,
and Kempelen would reportedly open the cabinet doors and reveal
some gears and mechanics to prove that it was purely
a mechanical system. In fact, the gears were masking a

(22:58):
hidden compartment behind them, in which a human chess player
was sitting inside, hunched over, keeping track of a game,
using a smaller chessboard in front of them, and using
various levers to move the Turk's limbs in response. Now,
a lot of folks suspected that something was up from

(23:18):
the get go, but you know, part of the fun
of a magic trick is just not knowing what's going on.
Some folks try very hard to figure out the process.
I am not one of them. Others are just happy
to be entertained by a very well performed trick. But
in a way, the Turk was a kind of black box.

(23:40):
In fact, you could argue that a lot of magic
tricks pretty much fall into the black box category. The
process is purposefully hidden from the viewer. If we could
see what the magician was doing from beginning to end,
all the way through and without any misdirection, then it
wouldn't be magic. We might admire the skill of

(24:03):
the magician, how quickly they were able to do things,
but we wouldn't really consider it magical. So the output
is dependent upon people not knowing the process the inputs
went through. Now that's not to say that you can't
appreciate a really good magic trick even if you know
how it's done. One of the best examples I know

(24:23):
of is Penn and Teller. They did a phenomenal version
of the cups and balls routine where they used clear
plastic cups and balls of aluminum foil to demonstrate how
cups and balls works, and you can watch the entire
time, and even being able to see through the cups

(24:44):
and see the moves that are being made, Teller does them
with such skill that it is truly phenomenal. It doesn't
hurt that Penn is spouting off a lot of nonsense
at the same time and misdirecting even as you're watching
what's going on. I highly recommend you check it out
on YouTube. Look for Penn and Teller cups and balls.
You won't be disappointed. Now, the Turk, as far as

(25:08):
I can tell, was always intended to be an entertainment,
not necessarily something that was specifically meant to perpetuate some
sort of hoax. You wouldn't call a stage magician a
huckster or a con man or anything like that. Their
occupation is dependent upon misdirection and making impossible acts seem

(25:30):
like they really happened, but always or nearly always with
the implication that it's all an illusion or a trick
of some sort. But not everyone is quite so forthcoming
about the fact that the thing they're doing is done
through trickery. For the scam artist, the black box creates
an incredible opportunity. As technological complexity outpaces the average person's understanding,

(25:57):
the scam artists can create fake gadgets and devices that
they claim can do certain things and then count upon
the ignorance of the average person to get away with it. Typically,
the go to scam is to convince people with money
to pour investments into the hoax technology in an effort
to fund whatever the next phase of development is supposed

(26:20):
to be, whether that's to bring a prototype into a
production model or to refine a design or whatever. But
the end result is pretty much the same across the board.
The con artist tries to wheedle out as much money
from their marks as they can before they pull up
stakes and skip town, or they find some way to

(26:41):
shift focus or punt any promises on delivering results further
into the future, like that's the future me problem kind
of approach. Once in a blue moon, you might find
someone who was just hoping to buy enough time to
come up with a way to do what they hoped
for real, or at least to simulate it close enough

(27:02):
so that people are satisfied. That typically doesn't work out
so well. Ahem, Theranos. I'll get back to that. So
let's talk about some examples of outright scams that leaned
heavily on the black box concept, whether by having their
supposed and actual operating mechanisms hidden or by obscuring how

(27:27):
they really worked with a lot of nonsensical claims and
techno babble. One historical scam artist was a guy named
Charles Redheffer, who claimed to have built a perpetual motion machine.
If he had managed to do such a thing, it
would have been a true feat, as it would break
the laws of physics as we understand them. So let's

(27:48):
go over why that is just pretty quickly. For perpetual
motion to work, and thus for free energy in general
to work, a machine would need to be able to
operate with absolutely no energy loss, and for free energy,
it would have to generate that energy in some way.
A perpetual motion machine, once set into motion, would never

(28:13):
stop moving, you know, unless someone or something specifically intervened.
But if it were left to its own devices, it
would continue to do whatever it was doing until the
last syllable of recorded time. To borrow a phrase from
the Bard. Now, if we look at our understanding of thermodynamics,
we'll see that doing this in the real world is impossible,

(28:36):
or at least it would go against our fundamental understanding
of how our universe works. The first law
of thermodynamics says that energy is neither created nor destroyed.
Energy can, however, be converted from one form into another.
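As a quick worked example of that conversion, with made-up numbers purely for illustration: drop a mass from a height and, ignoring air resistance, its gravitational potential energy turns into the same amount of kinetic energy by the bottom of the fall.

import math

# Energy changing form, not being created or destroyed (air resistance ignored).
g = 9.81        # gravitational acceleration, m/s^2
mass_kg = 0.5   # half a kilogram, just as an illustration
height_m = 2.0  # dropped from two meters

potential_energy = mass_kg * g * height_m           # joules, before the drop
impact_speed = math.sqrt(2 * g * height_m)          # from m*g*h = 0.5*m*v^2
kinetic_energy = 0.5 * mass_kg * impact_speed ** 2  # joules, at the bottom

print(f"potential energy: {potential_energy:.2f} J")
print(f"kinetic energy at impact: {kinetic_energy:.2f} J")  # same number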
So if you hold a water balloon over the head

(28:58):
of a close personal friend, let's say it's Ben Bowlin
of Stuff They Don't Want You to Know. The water
balloon has a certain amount of potential energy. If you
let go of the balloon, that potential energy converts into
kinetic energy, the energy of movement. You didn't create or
destroy energy here, it just changed forms. So if you

(29:22):
have what you claim to be a perpetual motion machine
and you set it in motion, the energy you gave
that machine at that initial point should sustain it forever
and it would never have that initial energy change form
into some other type of energy that could then escape
the system and show a net energy loss for the

(29:45):
system itself. Remember, the energy is not being destroyed, but
it can be lost in another form. This means that
such a machine could not have any parts that had
any contact with one another, which would make it a
really strange machine. And that's because friction would be a
constant means for energy to convert from one form to

(30:06):
another form, in this case kinetic energy, the energy of
movement into heat. Friction is the resistance surfaces have to
moving against each other. So if the machine has any
moving parts at all, those parts will be encountering friction,
which means some of that moving energy will be converted
to heat and thus escape the system. So the overall

(30:29):
system of the machine itself will have a net loss
of energy. There will be less energy to keep it going,
which means gradually it will slow down and ultimately just stop.
As a result, it might take a long time if
the machine is particularly well designed, but it will eventually happen.
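If you want to watch that slow death by friction in numbers, here's a tiny simulation in Python; the inertia, starting speed, and friction torque are arbitrary values chosen just to show the shape of what happens.

# A spinning flywheel losing energy to friction on every step.
# All numbers are arbitrary illustration values.

inertia = 2.0            # kg*m^2, resistance to changes in spin
omega = 10.0             # rad/s, the speed from the one push we gave it
friction_torque = 0.05   # N*m, constant drag from bearings and air
dt = 1.0                 # seconds per simulation step

seconds = 0.0
while omega > 0:
    # Friction converts kinetic energy into heat, shaving off a little speed.
    omega = max(0.0, omega - (friction_torque / inertia) * dt)
    seconds += dt

print(f"the wheel coasts to a stop after about {seconds:.0f} seconds")

No matter how good the bearings are, you can only stretch that number out; you can never make it infinite.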
You would need some form of energy input to keep

(30:51):
things going on occasion, kind of like a little push.
Imagine that you've got a swing, like a rope
with a tire at the end of it. No one's
in it right now. You would have to give that
tire a little push every now and then to keep
it swinging, otherwise it will eventually stop. But that means
you wouldn't have a perpetual motion machine. There are other

(31:12):
factors that similarly make perpetual motion impossible. If the machine
makes any sort of sound, then some of the energy
of operation is going into creating the vibrations that make sound.
Sound itself is energy, it's kinetic energy, so that would
mean the machine as a whole would be losing energy
through that sound. A machine operating inside an atmosphere has

(31:35):
to overcome the friction of moving through air and the
list goes on. Moreover, if we could build a perpetual
motion machine, we'd be able to harness it for energy,
but only up to whatever the starting initial energy was
to get it moving in the first place. Because again,
energy cannot be created. We can build devices that can

(31:57):
harness other forms of energy and convert that energy into,
say, electricity. But these are not perpetual motion or
free energy machines. These machines are just collecting and converting
energy that's already in the system, or already present, so
they're not making anything. Redheffer, however, claimed to have built

(32:17):
a perpetual motion machine that could potentially serve as a
free energy generator. Now, if true, this would have been
an astonishing discovery. Not only would our understanding of the
universe be proven to be wrong, but we would also
have access to an inexhaustible supply of energy. Redheffer

(32:38):
showed off what he said was a working model of
his design in Philadelphia, and he was asking for money
to fund the construction of a larger, practical version of
his design. A group of inspectors from the city came
out to check out how this thing worked, and they
noticed something hinky was going on, even though red Haifer

(33:00):
was doing his best to run interference and prevent anyone
from getting too close a look at the machine. The
gears of the device, which was supposedly powering a second machine,
were worn down in such a way that it was
pretty clear that it was actually the second machine that
was providing the energy to turn the quote unquote perpetual

(33:21):
motion machine, not the other way around. So if we
were talking about cars, this would be like discovering that
the wheels turning were causing the pistons of the engine
to reciprocate in their cylinders. It's going the opposite way.
So the investigators then hired a local engineer named Isaiah
Lukens to build a similar device, using a secondary machine

(33:45):
to provide power to what would be the perpetual motion
type machine, and then they showed it to Redheffer,
who saw that the jig was up and he hoofed
it out of town to New York City. He tried to
pull essentially the same scam there, this time using a
machine that was secretly powered by a hand crank in

(34:06):
a secret room on the other side of the wall. Uh,
technically it was just a feller sitting there with a
hand crank in one hand and a sandwich in the other,
providing the work to turn this machine. Robert Fulton, a
mechanical engineer of great renown, exposed the whole device as
a fraud when he pulled apart some boards on the

(34:28):
wall and revealed the man sitting there cranking away, and
Redheffer fled again. Records of what happened next are sketchy.
It seems he might have tried to pull the same
dang scheme in Philadelphia again a bit later, but he
disappeared from the historical record after reportedly refusing to demonstrate
his new device. When we come back, I'll compare this

(34:50):
to what I mentioned before, Theranos, before we chat
about other concerns regarding the black box problem. But first
let's take another quick break. Okay. So, Theranos. This is

(35:11):
the biomedical technology company that was founded by Elizabeth Holmes,
and she is currently awaiting a trial on charges of
federal fraud in the United States. The trial was supposed
to begin in August twenty twenty, but has since been
delayed until twenty twenty one due to COVID nineteen. Now, the pitch

(35:32):
for Theranos was really, really alluring. What if engineers could
make a machine capable of testing a single droplet of
blood for more than one hundred possible illnesses and conditions,
So rather than going through multiple blood draws and tests
to try and figure out what's wrong, you could get

(35:53):
an answer based off one little pin prick within a
couple of hours. Maybe you would even be able to
buy a Theranos machine for your home, kind of
like a desktop printer, and that would allow you to
do a quick blood test at a moment's notice. Maybe
you would get a heads up about something you should
talk to your doctor about, preventing tragedy. In the process,

(36:14):
you might learn that with some changes in your lifestyle,
you could improve your overall health or stave off various illnesses.
It would democratize medicine, giving the average person more control
and knowledge about their own health and giving them a
better starting point for conversations with their doctors. And yeah,

(36:35):
that's a great goal. It's a fantastic sales pitch, and
it did get Holmes and Theranos a lot of interested
investors who really wanted to tap into this, because not
only is it something that you would want for yourself,
you could easily see that if this is possible, that
business is going to be like the next Apple. It

(36:57):
will become a trillion dollar company. Something that powerful
would undoubtedly become a powerhouse. Now I've done full episodes
about Theranos and how it fell apart because, spoiler alert,
that's exactly what happened. The technology just didn't work. But
I think a lot of what happened with Theranos was

(37:17):
largely dependent upon naivete, ignorance, and wishful thinking. Our technology
can do some pretty astounding stuff, right, I mean if
you had told me in two thousand that by the
end of the decade I would be carrying around a
device capable of really harnessing the power of the Internet
in my pocket and I would have access to it

(37:39):
all the time, I would have thought you were bonkers.
So if technology can do incredible things like that, why
can't it do something equally incredible with blood tests. The
idea is that, well, we're already seeing this amazing stuff happen,
why isn't this other amazing thing possible? And that is
dangerous thinking. It equates all technological advances and developments,

(38:03):
and that's just not how reality works. Moore's law, the
observation that, generally speaking, computational power doubles every two years,
has really helped fuel a misunderstanding about technology in general.
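Here's the simple arithmetic behind why that observation feels so magical, and why it's risky to assume it applies everywhere: doubling every two years compounds absurdly fast.

# Doubling every two years, compounded over two decades.
factor = 1
for year in range(0, 21, 2):
    print(f"year {year:>2}: {factor:>5}x the starting computational power")
    factor *= 2

A blood test, a battery, or an engine doesn't get that kind of free ride just because transistors did.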
We extend that same crazy growth to all sorts of
fields of technology when that doesn't actually apply, and it

(38:26):
gives us the motivation to fool ourselves into thinking that
the impossible is actually possible. That, I think, is what
happened with Theranos. Now, I'm not saying Holmes
set out to deceive people. I don't know what she
really believed was possible, but based on what I've read

(38:48):
and seen and listened to, to me, it sounds like
she figured there was at least a decent chance her
vision would become possible, and so a lot of Theranos's activities,
in my personal opinion, appeared to have been meant to stall
for time while engineers were working on very hard problems
to make the blood testing device work as intended. The

(39:11):
further into the process it got, the more the company had
to spin its wheels to make it seem like it was
making more progress than it actually was. The company had
raised an enormous amount of money from the investors, so
they were beholden to them. They had also secured agreements
with drug store chains to provide services to customers, so
they needed to perform a service. It had to show progress,

(39:35):
even if behind the scenes things had actually stalled out.
On top of that, you also have the reports of
executives like Holmes herself living the high life and really
enjoying incredible benefits of wealth because of the enormous investment
into the company. So that plays a part too. Theranos's

(39:55):
operations were effectively a black box to the
outside world. It was meant to misdirect and give the
implication that things were working fine behind the scenes, while
the people who were actually there were trying to keep
up the illusion while simultaneously attempting to solve what appeared
to be impossible problems. At some point, based on how

(40:15):
things unfolded, I would say that executives at Theranos appeared
to be perpetrating a scam, not just you know, trying
to maintain an illusion while getting things to work. They
were actively scamming people. In my opinion, maybe they were
still holding out hope that it would ultimately work out,
but that doesn't change that it was a classic case

(40:38):
of smoke and mirrors to hide what was really happening,
such as using existing blood testing technology from other companies
in order to run tests while claiming that the results
were coming from actual Theranos devices. But again, this is
all my own opinion based on what I've seen and
read about the subject. A court will have to determine

(40:58):
whether or not Holmes and others actually committed fraud.
A lot of the technology we rely upon in our
day to day lives is complicated stuff, and there are
limited hours in the day. It's a bit much to
ask anyone to become an expert on all things tech
to figure out exactly how they work. Tech is also
becoming more and more specialized, so you might become an

(41:19):
expert in one area of technology and be completely ignorant
of another. That's not unusual because it takes a lot
of time to become an expert at specific areas of tech.
These days, they've become so specialized. But by overlooking the
how, we can make ourselves vulnerable to bad actors out
there when it comes to technology. Maybe they are actively

(41:41):
trying to pull the wool over our eyes, or maybe
they're just simply misguided and they misunderstand how stuff works.
But either way, our own ignorance of how tech does
what it does, and the limitations that we all
face based on, you know, the fundamental laws of the
universe as we understand them, that all makes us potential

(42:02):
marks or targets. That's where critical thinking comes in and
plays a part. Knowing to ask questions and to critically
examine the answers, and to ask follow up questions, and
to not accept claims at face value are all important traits. Now,
we do have to be careful not to go so
far as to embrace denialism. If we are confronted with

(42:25):
compelling evidence that supports the claim, we need to be
ready to accept that claim. I'm not advocating for you
guys to just go out there and say that any
and every claim is just bogus. That's not the point.
I'll close this out by talking about something we're seeing
unfold in real time around us, and that involves machine

(42:46):
learning and AI systems. Now, if you follow the circles
that report on this kind of stuff, you will occasionally
see calls for transparency. Those calls are to urge people
who are designing these machine learning systems and AI systems
to show their work, as it were, and to have
the systems themselves show their work. It's not enough to

(43:07):
create a system that can perform a task like image
recognition and then give us results. We need to know
how the system came to those conclusions that it produced.
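One very simple version of showing your work looks like the sketch below: for a plain linear scoring model, you can list exactly how much each input feature pushed the result up or down. The features and weights are invented for illustration; real image recognition models are vastly harder to explain, which is exactly why people are calling for transparency.

# A deliberately simple, inspectable model: every feature's contribution
# to the final score is visible. Features and weights are made up.

weights = {"edge_density": 0.8, "symmetry": 0.5, "skin_tone_bucket": -0.1}

def score_with_explanation(features):
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

total, why = score_with_explanation(
    {"edge_density": 0.9, "symmetry": 0.7, "skin_tone_bucket": 1.0}
)
print(f"score = {total:.2f}")
for name, contribution in why.items():
    print(f"  {name}: {contribution:+.2f}")

With something this transparent, a feature that shouldn't be influencing decisions is immediately visible; with a black box, it isn't.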
We need this in order to check for stuff like biases,
which is a serious issue in artificial intelligence. Honestly, it's
a really big problem for tech in general, but we're

(43:29):
really seeing it play out rather spectacularly in AI. Now
I'll give you an example that I've already alluded to:
facial recognition technology. The US National Institute of Standards
and Technology conducted an investigation in twenty nineteen into facial
recognition technologies, and it found that algorithms were pretty darn

(43:51):
good at identifying Caucasian faces, but if they were analyzing
a Black or an Asian face, they were far less accurate,
sometimes up to a hundred times more likely to falsely identify somebody based
on an image. The worst error rates involved identifying Native Americans.
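The way researchers surface that kind of disparity is, at its core, bookkeeping: compare false match rates across demographic groups. Here's a bare-bones version in Python with invented records; these are not NIST's data, just the shape of the measurement.

# Measuring a disparity: false positive rate per demographic group.
# The records below are invented for illustration only.

results = [
    {"group": "A", "match_claimed": True,  "same_person": False},
    {"group": "A", "match_claimed": False, "same_person": False},
    {"group": "B", "match_claimed": True,  "same_person": False},
    {"group": "B", "match_claimed": True,  "same_person": False},
    {"group": "B", "match_claimed": False, "same_person": False},
]

def false_positive_rate(records):
    negatives = [r for r in records if not r["same_person"]]
    false_alarms = [r for r in negatives if r["match_claimed"]]
    return len(false_alarms) / len(negatives) if negatives else 0.0

for group in ("A", "B"):
    subset = [r for r in results if r["group"] == group]
    print(f"group {group}: false positive rate {false_positive_rate(subset):.0%}")

If those rates come out wildly different between groups, the system is not treating people equally, no matter how accurate it looks on average.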

(44:13):
So let's let that sink in, because when we talk
about issues like systemic racism, we sometimes forget about how
that can manifest in ways that aren't as intuitive or
obvious as the really overt stuff. We live in a
world that has cameras all over the place. Surveillance is
a real thing that's going on all the time. Police

(44:35):
and other law enforcement agencies rely heavily on facial recognition
algorithms to identify suspects and to search for people of interest,
and if those algorithms have a low rate of reliability
for different ethnicities, a disproportionate number of people who have
no connection to any investigation are going to be singled

(44:57):
out by mistake by these algorithms. Lives can be disrupted,
careers can be ruined, relationships hurt, all because a computer
program can't tell the difference between two different faces. That
is a serious problem, and it points to a couple
of things. One of the big ones is a lack

(45:18):
of diversity on the design side of things. We've seen
this with tech for a long time. There is a
really critical diversity issue going on with technology. The people
who are building algorithms and training machine learning systems are
largely failing to do so in a way that can
be equally applicable across different ethnicities. Meanwhile, organizations like the

(45:41):
American Civil Liberties Union are calling upon law enforcement agencies
to stop relying on technology like this entirely, pointing out
that the potential for harm to befall innocent people outweighs
the benefits of using the tech to catch, you know, criminals.
A machine learning system trained to do something like identify

(46:03):
people based on their faces needs to be transparent so
that when a bias becomes evident, engineers can go back
to the machine learning system and look and see where
it went wrong, and then train it to eliminate the bias.
Without transparency, it can be hard or impossible to figure
out exactly where things are going wrong within the system. Meanwhile,

(46:26):
real people in the real world are suffering the consequences. Now,
if we extend this outward and we look into a
future where artificial intelligence is undoubtedly going to play a
critical part in our day to day experiences, we see
how we need to avoid these black box situations. We
need to understand why a system will generate a particular

(46:48):
output given specific inputs. We've got to be able to
check the systems to be certain they are coming to
the right conclusions. Artificial intelligence has enormous potential to transform
how we go about everything from running errands to
performing our jobs, but we need to be certain that
the guidance we receive is dependable, that it's the right course

(47:11):
of action. And so I hope this episode has really
driven home how it's important for us to hold technology
up to a critical view. It's not that technology is
inherently good or bad, or that people are specifically acting
in an ethical or unethical way, but rather that without
using critical thinking, we can't be certain if what we're

(47:34):
relying upon is actually reliable or not. I also urge,
as always that we pair compassion with critical thinking. I
think there's a tendency for us to kind of assign
blame and intent when things go wrong, and sometimes that
is appropriate, but I would argue that we shouldn't jump

(47:56):
to that conclusion right off the bat. Sometimes people just
make bad choices, or they are misinterpreting things, but they don't
have any intent to mislead. So while I do advocate
that we use critical thinking as much as possible, let's
be decent, nice human beings whenever we do that. If

(48:16):
it turns out someone is truly being unethical and trying
to deceive others, that's obviously a different story. But before
you know for sure, I say we employ that compassion,
and hopefully we are able to solve these problems before
they have these real world impacts, because the consequences of

(48:38):
those are dramatic and terrible and avoidable if we use
critical thinking. I hope you guys enjoyed this episode. We'll
be back with other new episodes that will probably touch
on critical thinking, but they won't be, you know, completely
built around the concept. But if you guys have suggestions
for future topics I should tackle in tech Stuff, whether

(49:00):
it's a company, a trend, a personality in tech, a
specific technology you want to know how it works, anything
like that, let me know. Send me a message on Twitter.
The handle for the show is TechStuff HSW,
and I'll talk to you again really soon. Tech

(49:20):
Stuff is an iHeartRadio production. For more podcasts
from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.
