
June 3, 2020 44 mins

It's time for another critical thinking episode! We are flooded with information and not all of it is reliable. Let's talk about developing the skills to think critically and separate the good data from the bad.

Learn more about your ad-choices at https://www.iheartpodcastnetwork.com

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio.
Hey there, and welcome to TechStuff. I'm your host,
Jonathan Strickland. I'm an executive producer with iHeartRadio,
and I love all things tech and a lot of
you out there who have been listening to this show

(00:25):
for a while, I know that I like to get
on a soapbox about critical thinking, and I also talk
a lot about compassion and how I believe we really
need to employ both critical thinking and compassion together if
we are to be good human beings. In many places
around the world, including here in the United States, we

(00:48):
are seeing a play by play of what happens when
we ignore critical thinking and compassion when we do not
incorporate those qualities into our systems. When we do that,
then there are people who are specifically left out of
those systems who suffer because of those systems, from pandemic

(01:09):
responses to people on the sidelines who are judging the
actions and motivations of protesters who, let's face it, are
actually just asking that they be treated like human beings.
This is a problem that has a real world negative
impact on countless lives. We need to examine ideas and

(01:31):
claims carefully and make sure that those ideas and those
claims have merit. But we also need to remember that
other people are, you know, people. Some of them might
be knowingly engaged in creating or perpetrating falsehoods for whatever reasons,
and that's obviously bad. You know, if their motivations are

(01:52):
bad and they're acting on those bad motivations, they are
not good people. But others might be doing this without
realizing it or knowing that the information that they're spreading
is wrong, which is also bad, but that is potentially
reparable with the right approach. So today's episode is going
to be another one about critical thinking and probably a
bit of compassion thrown in there too, and I'll frame

(02:14):
it within the context of technology for the most part.
I'm not going to do a rundown of you know,
current events. But please keep in mind that we should
be employing these qualities everywhere, not just when it comes
to should I buy this gadget or is it too
good to be true? Critical thinking really is about digging

(02:35):
beneath the surface level of a topic or an issue
to understand what's really going on. And it's related to
stuff like the scientific method, and it's a skill that,
like most skills, gets better the more you use it.
It's not the same thing as outright denial. I'm not
telling you to go out and just deny stuff. It's

(02:57):
really more about asking questions and looking for the answers
and evaluating those answers, and if the answers hold up,
then accepting those answers, even if the answer doesn't align
with what you had already guessed was going to be
the answer. You've got to be willing to accept
it if it holds up under evaluation. And one other

(03:19):
thing I want to get out of the way is
that I don't mean to come across as being on
a high horse here. I want to tell you guys
something that I am not very proud of. It took
me an embarrassingly long time to really embrace critical thinking.
I didn't even hear the term until I was in college,
and my professors talked about it as though it had

(03:40):
been part of my education the entire time, and it wasn't.
And while I learned about critical thinking in college, I
was not good at it. I'd say it wasn't until
I was in my mid thirties that I started really
understanding it and employing it, and to this day I
still slip up. It's not unusual for me to realize
that something I had previously just accepted really required more

(04:02):
scrutiny and understanding. So you can think of this episode
as being inspired by my own slow journey toward critical thinking,
a journey that's still going on today. But let's start
with the obvious. We've all seen countless advertisements of variable
messaging and quality, and we all understand that ads are
intended to motivate us to spend money on goods or services.

(04:26):
That includes stuff like political ads too. By the way,
in those cases, you can think of the transaction as
a vote rather than an exchange of money, unless we're
talking about ads that are asking for campaign contributions, in
which case, yeah, that's similar to an ad trying to
sell you a new car or pair of sneakers or whatever.
So let's think about this from the perspective of someone

(04:47):
who designs ads. Their job ultimately is to create something
that motivates the audience to take action. Moreover, that action
typically means motivating them to spend some of their own
money on a product or service, and that could be
a tough hill to climb right, So someone who is
good at creating ads has to think about how to

(05:08):
frame their subject in a way that speaks to the
intended audience. Typically, these ads need to convince that audience
that whatever is being sold is something that the audience
actually needs. Now, if whatever is being sold is already
an established thing, that's not as difficult, right. I mean,
if I'm trying to sell microwave ovens, I don't have

(05:31):
to explain what a microwave oven is because those have
existed for decades. Moreover, folks tend to understand the use
case for a microwave oven and how they can really
be convenient and fast compared to other methods of food preparation.
So all of that groundwork has already been done for me.
But let's say I come out with a new product,
or a product that hasn't really established a firm foothold

(05:54):
in the market. Now I have to convince my audience
that the thing I have created or that I am
tasked to sell is actually useful. I will have to
demonstrate how it solves a problem. Or, and these ads
are particularly clever, I have to convince my audience that
they have a problem they were previously unaware of, you know,
the problem they never knew they had, and that my

(06:15):
product solves that problem. I'm sure you've seen examples of
ads or critiques of ads that have some variation of
the phrase it solves a problem you didn't even
know you had. That, by the way, is a red flag.
But we're going to get to red flags a little later.
So that's one of the responsibilities that ad creators must meet.
Another is to convince the audience that the specific variant

(06:36):
of the goods or services that the ad is about
is superior to all others in that same category. Going
back to the microwave oven, if I were to try
and market a new oven, I would need to convince
people that the oven I was producing was better than
the dozens of other variations that are already on the market,
or else I'm not going to get a good return

(06:56):
on my investment. I'm not gonna sell many ovens, so
I need to convince people my microwave is the best
in some way. Maybe it's less expensive, maybe it's more
energy efficient, maybe it heats more evenly, or maybe I
just claim some combination of these things and then try
and go for that. All of this needs to be
conveyed in an effective way to the audience, and that

(07:19):
means it needs to grab attention, It needs to be memorable,
and typically it needs to be short. And really skilled
advertisers have become adept at sussing out some basic psychology
to trigger our impulses. So let's talk about some of
those red flags I just mentioned. I already said that
the concept of solving a problem you didn't know you
have should be a warning. In some cases, it may

(07:42):
very well be legitimate. The product or service might actually
take care of something that otherwise we had to do ourselves,
and we never even thought twice about it because in
our experience there was no way to offload that task.
But there is no shortage of products, particularly in the
as seen on TV category, that really don't solve anything.

(08:05):
They might create an alternative way to do something, but
they don't actually save any one time or effort. They
might not even work as well as the more established
methods they're supposed to replace, and they also cost you money.
You have to buy these things, right, so that would
mean for you it would be a net loss if
you bought one. As a rule, I assume any product

(08:27):
that relies on videos of actors being comically incapable of
doing something simple like opening a carton of milk to
fall into this category. Another red flag falls in line
with the old saying if it looks too good to
be true, it probably is. I think that saying can
be applied to just about all of the ads I see

(08:48):
whenever I visit Facebook. I can't tell you, guys how
frequently I've seen ads for stuff like electric guitars that
really go whole hog on this one. Now. I've been
doing a lot of research on guitars because I'm thinking
about finally learning to play one, but because I'm browsing
online to learn this stuff, and because I have not
taken better precautions to hide my browsing from Facebook, and

(09:11):
that is totally on me, I now get flooded with
ads for guitars when I look at Facebook. These ads
frequently claimed to sell some brand-name guitars, stuff like Fender,
you know, the Fender Stratocaster or the Gibson Les Paul,
but for ludicrously low prices. A guitar that could cost
six hundred dollars or more will list on one of these

(09:34):
ads for one hundred dollars. So talk about the temptation. If you're an
aspiring musician or someone who's been playing on an entry
level instrument, but you would love to own something more professional,
more high end, that sounds like an incredible offer. The
ads I saw typically said that whatever store was posting
the ad was going out of business, or that they

(09:55):
had to make room for new stock. They had to
do a clearance sale of all this old inventory, and
the ad was creating a fiction to make it seem
as though there was some justification for those low prices,
to give at least some sense that this possibly could
be on the up and up. Now did those ads
tempt me? Yeah, I'm human. But I also didn't believe them

(10:16):
immediately. I was suspicious. So I decided that I was
going to start opening up these ads into another browser
and look at the landing page. And I immediately grew
suspicious of them because the language was very similar to
other ones I had seen before, the details about the
page were really sparse, and also as soon as I

(10:37):
opened up more than one of them, I saw that
these ads, which were supposedly all for different companies and
different guitar shops, they all had different names and different
U r L s They all landed on pages that
were identical except for the U r L address, So
I wasn't getting redirected to the same website. No, there

(10:57):
were duplicates of this same website. That's a dead giveaway.
It's an indication that someone is casting as wide a
net as they can to try and trick people into
a transaction. So what is actually going on here? Well,
with many of these companies, most of which are running
out of places like China, the whole operation is a bait and switch.

(11:17):
The goal is to convince people that they are buying
something legitimate, like a Fender Stratocaster, but it could be anything.
It could be a costume piece, it could be some
other gadget, it could be knives, you name it. But
what the company will ship will be some sort of
cheap knockoff. And most companies won't even attempt to hide
this fact. It's just apparent as soon as you receive

(11:39):
the package. These companies are very slow to ship those products,
and really the goal is to waste enough time so
that by the time you finally get whatever that cheap
knockoff is, it's really hard for you to cancel that transaction.
The transaction has already gone through, and it's hard for
you to get your money back. Many of them have

(12:00):
no return policy, or they do have a return policy,
but it's going to cost you more to ship the
product back to the company than it cost you to
buy it in the first place. So that one hundred dollar
cheap knockoff of a Fender Stratocaster would end up costing
me two hundred dollars to ship back to the company
for a refund. So it's a net loss. So you

(12:22):
don't get the thing you actually, you know, paid for,
and you're left with no real way to fix that situation.
And since the company is operating out of another country,
you don't really have many options to seek justice. Now,
in my case, I started marking these ads and reporting
them to Facebook as being misleading, and this would prompt
Facebook to essentially block those ads from appearing on my page. Now,

(12:44):
it didn't mean that Facebook removed the ad, It's just
that I didn't see it anymore. But here's the thing.
As soon as I would block one version of that ad,
I would see another one that was nearly identical to
the first one, and it would go to a different
URL, but it was on that exact same
style web page, the same layout, same pictures, everything, So

(13:05):
it was the same scam. And then I would report
that ad, and then I would see another, And I
must have gone through at least half a dozen of
these before I stopped seeing versions of that ad. So today,
if I go to Facebook, I don't see that one anymore.
But that doesn't mean that there aren't other ads that are
following in this same pattern. And again, none of this

(13:26):
means that Facebook has stopped running those ads for
other people. Facebook's revenue is almost entirely dependent upon advertising.
It is not in Facebook's financial interests to come down
hard on advertisers and demand that they be transparent and honest.
If Facebook instituted those rules, it would lose out on
millions of dollars of revenue every quarter. So Facebook has

(13:49):
a financial incentive to run those ads and to serve
as a platform for advertising that's both good and bad.
Bad ads will only hurt Facebook if there's a large
enough response among users, and typically that would mean that
users would have to abandon the platform or just

(14:09):
not use it at all, or somehow block ads across
all of Facebook. The company's standpoint is really more hands
off, essentially saying caveat emptor, or buyer beware. The
responsibility of figuring out which ads are legit would fall
to you, the user, not to Facebook. This also applies
to Facebook's stance on misinformation. Now, if you are in

(14:30):
the United States, you've likely seen a lot of news
about Mark Zuckerberg saying that he doesn't want Facebook to
be an arbiter of truth. He doesn't want Facebook to
declare to users which posts reflect honest messages and which
are misinformation. And he frames it in a way that makes
it sound like Facebook is trying to be an agnostic
platform upon which people can express themselves. Freedom of speech

(14:53):
the First Amendment in the United States, and that to
act otherwise would mean Facebook would have to replace this
with a company-approved expression of ideas. So, yeah,
they're framing it as very much a free speech
kind of issue. But is that what's really going on?
When we come back, we'll consider some alternatives by applying

(15:13):
some critical thinking. But before that, and I know this
is going to sound totally hypocritical, We're going to take
a brief break to thank our sponsor, and by the
end of this episode I'll have more to say about sponsors
and ads in our podcasts in particular. But first, let's
take that quick break. Zuckerberg says he doesn't want his

(15:40):
company to dictate what is truth, and he frames it
in a way that seems to say, who are we
to decide what is truth and what is false? And
that's a fairly compelling argument, right. I mean, I don't
know about you, but I don't love the idea of
some centralized authority deciding, seemingly arbitrarily, which messages are true and
which ones are false. We've seen throughout Facebook's history that
(16:01):
which ones are false. We've seen throughout Facebook's history that
whenever the company tweaks their algorithm, people get mad because
they see only a selection of the posts their friends
are making. Every time it happens, I have to look
for the settings that allow me to view Facebook posts
by most recent rather than whatever factors Facebook thinks are

(16:22):
more important to me. And even then, I know I'm
not actually seeing everything my friends are posting in reverse
chronological order. I'm just seeing some of it. So I
think that not wanting to be the arbiter of truth
is part of the reasoning. I think there's a kernel
of truth there with Facebook. But I also think it's not
the only, or perhaps not even the primary reason for

(16:43):
that decision. So let's think critically about Facebook and how
Facebook makes money. So, as we already mentioned, Facebook makes
money through advertising, and it doesn't have a huge incentive
to make sure that those ads that are running on
the platform are for legitimate businesses and purposes, though the
company definitely does have an interest in preventing anything that

(17:04):
would be damaging to Facebook from running on that platform.
But another component to this is that Facebook makes more
money the longer people stay on Facebook actively. So if
you can convince people to actively stay on Facebook for longer,
you can serve them more ads, and that means you
make more money. Therefore, it benefits Facebook to employ strategies

(17:27):
that keep people glued to the website longer. One way
to do that is to design algorithms that show posts
that are proven to get more engagement than others. And
by engagement, we're talking about stuff like posts that get
lots of comments or a lot of people are sharing
it to their own pages or friends pages, and the

(17:47):
number of likes that those posts are getting. Posts that
encourage people to participate and perpetuate, essentially, and those are
the posts that keep people on Facebook longer. Thus, those
are the ones that mean more ads can be served
for longer amounts of time to those people. In the
business model for Facebook, all of this is a good thing.

(18:08):
So it's in Facebook's interests to design algorithms that can
identify the types of posts that are driving engagement and
then serve them to a larger spectrum of Facebook's users.
Remember I said earlier, I know I'm not seeing all
of the stuff my friends are posting, even when I'm
trying to view my Facebook feed in, you know, chronological order. Well,

(18:30):
that's because Facebook is kind of choosing which ones I
see and which ones I don't. And usually it's trying
to pick the ones that are more likely to drive
high engagement, and those are the ones I'm seeing. Once
you understand that that's how Facebook works, you can start
to craft posts that take advantage of this property. You
can game the system. You can build stuff that by

(18:52):
its very nature, is designed to drive engagement. Now, it
doesn't have to be true. In fact, if you restrict
yourself to posting stuff that's only true, you're not likely
to drive that much engagement. But it does need to
be really compelling. Things that are more outrageous or tap
into basic emotional responses, whether positive or negative, tend to

(19:13):
work really well. So it is with that understanding that
people can craft messages that contain misinformation and have those
messages perpetuate quickly across platforms like Facebook, and those messages
can be really harmful. And whether the person who made
the message intended to push a specific agenda, or they
wanted to discredit some other point of view they don't

(19:36):
agree with, or they just wanted to find a way
to drive engagement for whatever reason, the motivations are immaterial.
They could range from irritating but mostly benign to downright
mean spirited, but the outcome ends up being the same.
Misinformation spreads like wildfire across Facebook and beyond. And if
you are encountering misinformation all the time, and it's popping

(19:58):
up like crazy in your feed and being reinforced
that way, you start to get the impression that
that stuff is the real deal, even if it doesn't
sound legit to you. The fact that Facebook can become
a fire hose of misinformation helps reinforce those messages, even
if the messages themselves lack merit. Compare it to living
in a place like China or North Korea, where nearly

(20:19):
all messaging requires state level approval, and that means the
government gets to dictate what information the citizens can access. Well,
that sort of thing is bound to shape thoughts and opinions.
If you never get to see anything outside the approved stuff,
then you have no idea that it even exists. We
don't have this innate ability to know the truth from falsehood.

(20:42):
Facebook meanwhile, has little incentive to change anything. Facebook isn't
in the business of trying to make things better for
the average person. That's not their business model. They're in
the business of making money, and that increased level of
engagement drives revenue, which means more money for Facebook. Acting
against that would go against the company's self interest. Moreover,

(21:03):
there's a concept that's important to Facebook's operations called safe harbor. Now, basically,
this concept is that if a company is acting as
a means of communication, it cannot be held responsible for
the stuff that's said using that company's services. So let's
say that someone were to call up another person

(21:23):
on the phone and then threaten that person. So
person A calls person B and issues a threat over
the phone, the phone company would not be responsible for
that horrible act. The phone company just runs the infrastructure
that was used to make the phone call. The same
sort of concept generally applies to platforms like Facebook. The

(21:44):
idea is that if a user were to post something
illegal or disruptive on Facebook, the company wouldn't be held responsible,
particularly if the company could show that it had acted
promptly in response to complaints or reports about the transgression.
If Facebook takes a stance on misinformation, there might
be a fear that it would be shifting into a

(22:05):
role of accountability, which might seem to undermine the safe
harbor argument. In addition to that, monitoring posts and labeling
stuff that is spreading misinformation as such, or removing such
messages or whatever the plan is, would require an investment
on Facebook's part. The company would have to spend time, energy,
and resources, and all that boils down to money to

(22:28):
create a method to identify and label or remove posts
that are promoting misinformation. So not only would Facebook be
cutting back on the types of posts that drive engagement
and thus generate revenue, the company would also have to
pour money into that effort. Asking a business to spend
money so that it can make less money is a

(22:49):
pretty tough argument. It's no wonder Zuckerberg is opposed to it. Yes,
it would be a huge responsibility to determine which messages
reflect reality and which do not. It would bring Facebook
under enormous scrutiny and criticism. Any person or group or
whatever that saw messages being labeled or removed would raise

(23:10):
an enormous stink over it, and the company would have
to deal with that. Right, So, let's say people who
are perpetrating misinformation see that their messages are being taken down,
then they can just make it an even bigger thing.
But more than that, combating misinformation hurts the company's bottom line. Now,
as I record this, news is breaking that dozens of

(23:31):
Facebook employees have staged a walk out in protest of
how Zuckerberg and other high level management at Facebook have
refused to intervene when it comes to misinformation and inflammatory posts. Now,
the particular posts that are at the center of this
protest originated from the office of the President of the
United States. So the stakes are very high on this one,

(23:53):
and I'm definitely dogging on Facebook a lot here, but
I want to be clear that it's not the only
corporation to do this kind of stuff. The stage
had been set long before Facebook was even a thing.
Back in the nineteen sixties, financial analysts and big thinkers
started a gradual shift towards a focus on shareholder value,

(24:15):
pairing that idea with the idea that executives should receive
compensation that traded high salaries for stock options. The logic
was that if your leaders in your company have a
personal stake in the performance of the company, they will
make decisions that will be best for the company. But
it hasn't necessarily played out that way, as time and

(24:38):
again executives have made decisions that were incredibly positive from
a shareholder value and thus a personal finance perspective, but
had negative consequences for stuff like customer satisfaction or employee
conditions and more. In recent years, more leaders, including Jack Welch,
who was seen as the champion of shareholder value back

(25:01):
in the nineteen eighties. He was the CEO of General
Electric back at that time. Even Jack Welch has come
out and said that this is a bad approach and
that really the priorities should be customers first, then employees,
and then shareholders. They should be behind the focus on
customers and employees. But this is a view that's taking

(25:24):
a very long time to percolate throughout business as a whole,
and in the meantime we see a lot of leaders
making decisions that result in enormous increases in wealth at
others' expense. Now, why did I bring all that up
in the first place. Well, I did it to show
that by taking a particular stance or claim, in this
case Zuckerberg's explanation for why Facebook shouldn't weigh in on

(25:46):
whether or not something is accurate or true, we can
actually ask ourselves questions and look at the subject on
a deeper level, and at the end of the day,
you might find yourself agreeing with Zuckerberg's decision, even if
his explanation for that decision is not the full truth. Now,
I personally do not agree with his decision, but that's
my own point of view, and I don't wish to

(26:08):
argue that my point of view is the quote unquote
right one. It's just the one I happen to have.
I do think it's important to get as full an
understanding as possible before weighing in with opinions. Now, this
is a skill, and like all skills, we all need
to practice at it to get better, and I include
myself in that. And you've likely encountered messages that reinforced

(26:29):
a previously held belief. These are very easy to accept
as the messages we see fall in line with what
we already believe. We're more inclined to accept those kinds
of messages right, and we're more likely to dismiss a
message that conflicts with something we really believe. This is
a natural human response, and that means it's also the

(26:50):
type of response other people can count on when they
craft messages. For example, i hold some pretty liberal beliefs,
and I'm not asking any of you to subscribe to
my beliefs. Some of you might have very conservative beliefs,
and I'm not going to tell you to change. Rather,
I'm framing it this way so that we understand where

(27:11):
I'm coming from. So, if I encounter a message that
indicates the President of the United States has said or
done something particularly upsetting to me, I am predisposed to
accept that that report is absolutely true. And partly that
has to do with history. The president does have a
very long record of saying and doing things that I

(27:32):
find upsetting, so precedent has conditioned me to expect such
things from him moving forward. But the rest is because
I hold certain biases, and if a message reinforces those biases,
I'm likely to buy into that message. And that's precisely
when I need to employ critical thinking. When I encounter

(27:54):
these messages, I actually do research to see if they're
true or relevant. Like I see a lot of messages
that contain quotes supposedly said by the President, and I
try to do research to make sure that those are
things that he actually said, to verify that somewhere the
President is recorded as saying that quote, and also to

(28:14):
see if there's more context around the quote and that
it's not something that was pulled out of context that by itself
is really awful, but within context isn't. And sometimes I
can't find any evidence of the quote outside of the
original message I saw, and then I'm just thinking, well,
this could just be made up because it's playing

(28:36):
to my expectations, and that's not enough. Likewise, I could
encounter a message that praises someone from the liberal side
of the spectrum, and once again, my bias means I'm
predisposed to accept that as fact because it falls in
line with my personal world view. But I also research
these messages to make sure they are real and within

(28:58):
the proper context. I do not want to blindly accept
that a message that happens to align with my
personal worldview is true and then go on to perpetuate misinformation.
Doing this is not always easy. I find that, especially
when I'm particularly emotional, it's really challenging to remember to

(29:19):
apply critical thinking. But I also think that's when we
need to rely on it the most. On the mild side,
it might mean you're less likely to spread falsehoods, but
on the heavier side, it could mean you helped defuse
a dangerous situation. When we come back, I'll go into
some more elements of critical thinking and compassion that are
applicable in the tech world and beyond. But first let's

(29:42):
take another quick break. In past episodes, you can hear
me get really emotional about instances in which opportunistic people
have used technology to take advantage of others in various ways.
The schemes all tend to fall back on the common

(30:04):
failures that we have as humans, and honestly, there is
nothing really new about those schemes. There are tricks that
con artists have been relying upon for centuries. The tools
of the trade are the same. What changes is whatever
is being sold and whatever platform you're using in order
to sell it. And it doesn't matter if we're talking

(30:26):
about a gadget or a philosophy. And I think all
scams are reprehensible, but I get particularly angry at those
that are taking aim at already vulnerable targets. For example,
people who are looking for a job. This is a
population of people who are seeking a means to earn
a living. They might be trying to land their first

(30:48):
steady job ever, or they might be trying to transition
from one job that's not so great into something that
they hope is better. Anyway, they are putting themselves out
there in search of opportunities, and that means they are
a vulnerable population, and some people find that irresistible. There
are numerous individuals and shady companies that take advantage of

(31:10):
job seekers. A common tactic is to advertise a job online,
but the advertised job is just bait. There is no
intent on offering that job to an applicant. In fact,
the job might not even exist. The goal is to
lure people into a job interview and then pull the
bait and switch, saying oh, you know, we already filled
that position, but we have a totally different job that

(31:33):
we would love to offer you. And thus they offer
up a different job to the applicant, and typically the
offered job, the new one is less desirable than whatever
was originally being applied for. It might have lower pay,
it might have fewer benefits, or both, or it
may just be, you know, less desirable in its job duties.

(31:54):
But but the person doing the hiring knows that if
someone has taken the effort to go to a job interview,
and if they are looking for a job, if they're
in that place, they could be in need enough and
emotionally vulnerable enough to agree to this switch, even though
it wasn't what they were told when they first applied. Yeah,

(32:15):
that's one way to make me really angry, really fast.
And it is hard out there. I've been in a
steady gig since two thousand eight, and even from this
cushioned position, I know it's hard out there, and people
looking for a job fall into a category of people
who cannot afford to lose. They're looking for that chance

(32:36):
to land something steady. They're hopeful, and unethical jerk faces
will pounce on that. There are other groups that are
even shadier, that are really just trying to get as
much personal information as possible and they have no job
to offer whatsoever. Or they are part of a pyramid
scheme that requires the job seeker to pour some of
their own limited money into that scheme, and the lure

(32:59):
is that if they bring other folks into that organization,
they will get some of the money of the people
that they bring in. But those schemes are entirely
dependent upon convincing more and more people to join further
down the chain, and typically only a few folks towards
the very top ever really make anything, and it's at

(33:20):
the expense of everyone below them. That's classic pyramid schemes.
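To see why pyramid schemes have to collapse, here is a rough, hypothetical sketch of the recruitment math. The recruitment number is an assumption for illustration, not a figure from any real scheme:

```python
def level_size(recruits_per_member: int, level: int) -> int:
    """Number of new members needed at a given level of a pyramid,
    assuming every member recruits the same fixed number of people.
    Level 0 is the single founder."""
    return recruits_per_member ** level

# Even if each member only has to recruit 6 others, level 13 alone
# would need 6**13 = 13,060,694,016 people, more than the entire
# world population. Recruitment must stall long before that, leaving
# everyone at the bottom unable to recoup what they paid in.
print(level_size(6, 13))  # -> 13060694016
```

The geometric growth is the whole trick: the few at the top are paid out of the exponentially larger crowd below, and the crowd runs out of planet.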
Now you've probably picked up on the fact that a
lot of this all has to do with catering to
what people want to believe is true, and that is
a huge part of it. But another is relying upon
people interpreting information incorrectly. For example, Moore's law. This is

(33:43):
a great one. Now, let's begin by saying Moore's law
came from an observation. It wasn't that Gordon Moore
declared some inherent law of the universe. Rather,
he was making an observation about how semiconductor companies were
able to fit more components on a square inch

(34:03):
of silicon wafer, and that there was a market
demand for doing more of that, which allowed them to
invest in the process of making chips
that had even more transistors on them. He was
looking at market trends that were supporting this
overall technological trend. But generally speaking, these days, we say

(34:26):
that Moore's law means that computers today are twice as powerful,
meaning faster at processing information, as computers from two years ago,
and the computers from two years ago are twice as
powerful as the ones from two years before that. So
every two years computer processing capabilities double. It's not a
hard and fast rule, but it's close enough to what

(34:48):
we observe to kind of serve as shorthand. But that
doesn't mean other technologies keep up at that same speed.
Battery technology, for example, doesn't. But because we've become accustomed
to computers evolving so quickly, it is easy for us
to make the mistake of extending that quality to other technologies.
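As a back-of-the-envelope illustration of how that doubling compounds (a sketch of the popular paraphrase, not a claim about any specific chip), the arithmetic looks like this:

```python
# Moore's law as commonly paraphrased: processing capability
# doubles roughly every two years.
def relative_power(years: int, doubling_period: int = 2) -> int:
    """Relative processing power after `years`, versus today (today = 1)."""
    return 2 ** (years // doubling_period)

# After a decade, that's five doublings: 2**5 = 32x today's power.
print(relative_power(10))  # -> 32
```

That compounding is exactly why it is tempting, and wrong, to assume every other technology (batteries, biotech, diagnostics) improves on the same curve.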

(35:09):
And if you're an unscrupulous person you can jump on
that by making a spurious claim that sounds like it
could be possible and raking in the benefits. You guys
probably remember that I've done episodes on the company Theranos.
That's a company that aimed to produce a machine capable
of running hundreds of diagnostic tests on a single droplet

(35:31):
of blood. The ambition of the company and its founder,
Elizabeth Holmes was to create a device that would revolutionize medicine.
With a machine the size of a desktop printer, it
could become a household appliance and would allow people to
run a test quickly, and then they could share that
information with their doctors. They could lead healthier lives. They
could micromanage their health. It's kind of in line

(35:54):
with that whole quantitative approach we were taking for a
very long time with all this stuff like fitness trackers
that would give us detailed information about things like how
many hours of sleep we got, how many times we
tossed and turned. It's in that same vein, and potentially
this would mean that a user would pick up warning
signs early enough to be able to take action before

(36:14):
things get really serious. It could save lives, and it
would make a crap ton of money. But there was
just one small problem. The technology didn't work, but a
lot of people believed it could work. I mean, we've
got some phenomenal technology at our disposal right now. Right
if you have a smartphone, you've got a computer that

(36:36):
fits in your pocket. It's got a ton of processing
power compared to even old mainframe computers, and it could
connect to the internet. It can give you amazing communication
abilities as well as access a truly enormous amount of
information that's been gathered around the world. It's a relatively
mundane piece of technology at this point, right And when

(36:58):
the amazing becomes mundane, the impossible starts to sound plausible.
So why couldn't a machine use a tiny amount of
blood as a sample for hundreds of tests? I mean,
why couldn't it return accurate results in a matter of hours.
We've got technology that lets us shoot ultra high resolution
video using a phone. We've got tech that we can

(37:22):
talk to and give commands to, and it responds to us.
It stands to reason that we should be able to
run these sorts of tests on that small a sample
of blood. And that leap of logic is the problem.
Our capabilities in one area can lead us to believe
we are equally capable in unrelated areas. It would be

(37:42):
like if I were a world class athlete in a
specific sport and I thought, just because of that, I'm
automatically equally as amazing in some other unrelated sport. Even
very intelligent people got caught up in this promise. Very
successful people were caught up by this lie, and the

(38:03):
promise was just so darn positive. It would be amazing
if we could run that many tests on such a
small sample of blood. And it would be even more
amazing if we could come up with a consumer version
of that technology that the average person could have in
their home. That kind of gadget could potentially save millions
of lives. Of course, you want that tech to exist

(38:24):
and to work, but wishes don't make things true. So
we've got a technology that would be awesome if it existed.
We've got a culture and business that revolves around risk.
We've got a startup culture that glorifies the concept of
fake it until you make it, which means you come
up with a goal and then you raise a whole
bunch of money, and then you flail around trying to

(38:46):
achieve that goal and hopefully something you do sticks somewhere
along the line. You manage some level of success before
your money runs out, and while the money is rolling in,
you might as well live high on the hog. If
it doesn't work out, well, that's a bummer. But in
the startup world, failure is common. Fail fast is the
name of the game. Now, I don't know if Theranos

(39:10):
founder Elizabeth Holmes genuinely believed she was going to be
able to create the tech that she envisioned. If she did,
that wouldn't really surprise me. Like I said, the idea
is really attractive, and our technology is already so incredible
that you could be swayed to think this could be possible.
I also don't know if she still believed in it

(39:30):
by the end of that whole story, when numerous investigations
and leaks revealed the extent to which this technology definitely
did not work, and the extent to which the company
moved to hide that fact. But I do know this,
just because you believe in something super hard doesn't mean

(39:51):
that thing is magically true. Holmes herself is currently in
the middle of a legal battle over the entire Theranos
fallout, with a new charge that was filed against
her recently, like as of the week of the recording
of this, but for more than a decade her company
was able to get significant investments from numerous wealthy individuals

(40:11):
and organizations. The promise of a truly huge payout was
tempting because if the technology worked, this would be a
gold mine. Better than that, it would create a return
on investment that would be impossible to guess at. You
might be thinking, Wow, I'm gonna pour a hundred thousand
dollars into this, and I'm gonna be raking in ten

(40:33):
million dollars in no time or more, and that was
enough to shut down critical thinking. The message of Theranos
had multiple vectors of appeal. It appealed to your sense
of innovation. It appealed to the idea of democratization of medicine,
so sort of an altruistic appeal. Simultaneously, and somewhat conflictingly,

(40:53):
it also appealed to greed, and lots of people allowed
themselves to be swayed by this. Now I need to wrap
up this episode, but I said earlier that I would
take some time to talk about sponsors and ads that
I run on this show. Now, most of the time,
our sales department here at I Heart runs potential ad
deals by me for my approval, and I take this seriously.

(41:18):
I try to look into each company and product or
service before I give my approval. I don't want to
pass along something that's misleading if I can help it.
Sometimes I don't get a choice in ads, but that's
a rare thing. So if I voice an ad for
something that's misleading or whatever, chances are the fault is mine.

(41:38):
I try to avoid it. Sponsors and advertisements are a
necessary part of what I do because without them, this
show would be an expense, but there would be no
way to generate money for it. We would just be
spending money to make the show and get nothing back,
and that means this show would go away. So I
try to balance the financial responsibilities I have as an

(41:59):
employee with the responsibilities I have as someone communicating
to you guys. And I wish I could share with
you a list of some of the deals that I've
said no to, but that would be really unprofessional of me.
Let's just say it's not a short list. But even
with all that said, I encourage people to think critically

(42:20):
about all messaging. You don't have to dismiss stuff right
out of hand, just question it, even if it aligns
with your beliefs. Heck, especially if it aligns with your beliefs.
In the long run, it can help your case. If
you're able to suss out the truth from the lies,
you can build your argument for your side more effectively,

(42:41):
and you don't leave yourself open to attacks on credibility. Moreover,
we need to exercise those skills to make sure we're
just being good humans, that we are taking care of
ourselves and of each other. Using those two things, critical
thinking and compassion, we can recognize when others are truly
in need. Then we can also help identify the best

(43:03):
ways to help them. If we think critically, we might
realize that changing a profile picture or tweeting a message
might not really be enough. It's more of a performance
than an action, and that we can and should do more,
even if that's just to shut our yaps and listen,

(43:24):
and guys, I'm listening. I promise if you guys have
any suggestions for future episodes of tech Stuff, whether it's
a specific technology, maybe it's a company, maybe it's a
personality in tech, maybe it's just a trend in tech
in general. Reach out to me and let me know.
You can get in touch on Facebook or Twitter. The

(43:45):
handle at both of those is tech stuff H S W,
and I'll talk to you again really soon. Tech
Stuff is an I Heart Radio production. For more podcasts
from I Heart Radio, visit the i Heart Radio app,
Apple Podcasts, or wherever you listen to your favorite shows.
