
March 20, 2023 48 mins

From a pair of generals paralyzed by bad communication to a trolley hurtling out of control, we look at some classic thought experiments and how they pertain to technology. Plus, are we living in a simulation?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to TechStuff, a production from iHeartRadio. Hey there,
and welcome to TechStuff. I'm your host, Jonathan Strickland.
I'm an executive producer with iHeartRadio, and how the tech
are you? You know, often when we talk about tech,
we reference various thought experiments, hypothetical situations, philosophical problems, and

(00:29):
game theory. And this can get a little bit confusing
if you've never actually studied any of those things, and
people are just kind of off handedly spouting off terms.
So today I thought I'd cover a small handful of
them as a sort of foundation for future discussions. Keep
in mind, there are tons of these, and I'm only

(00:51):
covering like the teeniest, tiniest number of them. Some of
these I have talked about extensively in another episode, so I'll
try to go a little light with them on this episode.
But some of them are brand new to me, or
I had only heard the name of them but had
never actually researched the actual scenario or thought experiment. Now,
these thought experiments in general, not the ones specifically we're

(01:14):
talking about today, but the practice of thought experiments dates
back quite a ways, at the very least to ancient Greece,
because we have records of them, so back then they
were used to conceptualize complex mathematical problems and to give
people a chance to consider consequences outside of a real

(01:35):
world situation. But first up, I thought we would talk
a little bit about game theory because we actually saw
a real world version of game theory play out just
a couple of weeks ago. And arguably you could say
this is only tangentially related to technology, but it did

(01:55):
have and continues to have a massive impact on tech.
So the game theory situation that we're going to really
talk about is known as the prisoner's dilemma. Folks were
referencing this one in the wake of the Silicon Valley
bank collapse, which is why I say this is only
tangentially related to tech, because it's really about a run

(02:18):
on the bank, and that bank just happened to be
a really important bank to the tech industry. But the
dilemma has its roots in the mid twentieth century, and
the basic version goes something like this. A pair of
suspected criminals are caught by police. They become the prisoners,
and the police plan to interrogate each of the prisoners separately.

(02:41):
The prison sentence for these criminals if they are found guilty,
which is just taken as the most likely outcome,
would be ten years. So if they are convicted, they
get ten years in prison. However, each of them is
told separately that there are different possible outcomes depending upon

(03:03):
their cooperation or lack thereof with the police. They just
aren't allowed to talk to each other, so the two
prisoners have no way to communicate with one another. They
have to make up their minds individually, and the four
possible outcomes are these: neither suspect talks, neither prisoner confesses,

(03:25):
and in that case they each get only two years in
prison because the cops won't have enough evidence to put
them away for the more serious crime. So they'll have
to go to prison, but i'll just be two years,
not the full ten. However, if one of them stays
silent but the other one confesses, well, then the one

(03:45):
who confesses gets to go free and the silent prisoner
has to serve the full ten years of the sentence.
If both prisoners confess, then each of them will get
five years. So you get your four possible outcomes. Suspects
A and B keep their traps shut and each of
them serve two years. Suspect A talks but B keeps quiet,

(04:08):
which means A goes off scot-free and B goes
to the pokey for ten years. Suspect A holds their tongue,
but Suspect B sings like a canary, and this time
Suspect B strolls out to freedom and Suspect A rots
away in prison, or they both go blabbing and they
both end up serving five years. Now, collectively, the most

(04:30):
beneficial outcome is to serve only two years by not
talking at all, and then the next best outcome for
both individuals collectively would be that they both talk and
they both have to serve five years, but not the
full ten. However, individually, if we're not talking about collectively, individually,
the best outcome is you talk and hope the other

(04:53):
one doesn't. That way, you can put on your dancing
shoes, just strut on out of the building, and
the other one stays behind. But worst case scenario, you talk,
the other person also talks, and you just serve half
the full sentence each. Actually, worst case scenario is you
decide not to talk and the other person does talk,

(05:14):
and then you're looking at ten years in jail.
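To make that payoff structure concrete, here is a minimal Python sketch using only the numbers from the example above; nothing in it comes from a real game theory library.

```python
# A minimal sketch of the payoff structure described above; the choices and the
# sentence lengths come straight from the example, nothing here is a real library.
SILENT, CONFESS = "stay silent", "confess"

# years in prison for (my choice, their choice); lower is better
years = {
    (SILENT, SILENT): (2, 2),     # both keep quiet: two years each
    (SILENT, CONFESS): (10, 0),   # I stay silent, they confess: I serve the full ten
    (CONFESS, SILENT): (0, 10),   # I confess, they stay silent: I walk free
    (CONFESS, CONFESS): (5, 5),   # we both confess: five years each
}

def best_response(their_choice):
    """Whichever choice leaves me with the fewest years, given what they do."""
    return min((SILENT, CONFESS), key=lambda mine: years[(mine, their_choice)][0])

# Confessing wins for the individual no matter what the other prisoner does,
# even though mutual silence is the better outcome for the pair collectively.
for their_choice in (SILENT, CONFESS):
    print(f"If the other prisoner decides to {their_choice}, my best move is to {best_response(their_choice)}")
```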
Well, when it came to Silicon Valley Bank, the real-world
scenario went like this. If everyone had remained chill, their
money would have been safe. SVB had overextended its
investments in government-backed securities, which would take years to mature.

(05:36):
And this was because SVB wasn't issuing as many loans
once interest rates had gone up significantly. Venture capitalists weren't
seeking loans so much. Plus they were already flush with cash,
and loans are the way that banks make money for
the most part. So instead the bank was investing in
longer term investments that would have a modest payout, but

(05:59):
it would have a payout once the investments matured. But
that would mean that if everyone just, you know,
cooled their jets and kept their money in place, things
would probably have been fine. SVB would have likely survived,
but instead the prisoners, the customers of SVB in this case,

(06:21):
chose to pull their money out because the thinking went
something like this, if everyone else takes out their money
and I don't, then SVB could shut down,
and then my money will be stuck, it'll stop existing,
or I won't be able to get to it, and
I need my money. It's what lets me buy all
that stuff like private jets and politicians. So I'm gonna

(06:43):
go get my money out before I lose out on
that option. The problem was that some very big
players who had a whole lot of money in SVB
did this, including the heads of venture capitalist groups, who
then urged all of their clients to do the same,
and so there was a run on the bank and
SVB could not cover all the withdrawals without selling off

(07:04):
assets at a huge loss, and that put SVB in
a very precarious situation, in fact, precarious enough that the
US government had to swoop in and take over the
bank and guarantee all the customers that they would still
be able to access their money. So enough prisoners took
the sure-thing route and they screwed over everybody else

(07:25):
in the process, which is not a big surprise because generally
there's an agreement that taking the tactic of confessing makes
the most sense from a game theory perspective, and that
loyalty has no place in the game, that if you
are loyal, the best you can hope for is two
years in prison and the worst is ten. So it

(07:45):
makes more sense to confess and either screw over the
other person or you both get screwed. So that's the
thinking behind the prisoner's dilemma, and like I said, we
kind of saw it play out with the collapse of
the Silicon Valley Bank. Now, one of my favorite thought
experiments that has some connection to the tech industry is

(08:07):
the Ship of Theseus, which dates back to at least
around five hundred to four hundred BC.
And the Ship of Theseus idea goes like this, So
you had the Greek hero Theseus. He had a ship,
and he ended up docking that ship, and people were
preserving the ship for his eventual return. And long after

(08:29):
the hero himself had faded away, the ship remained preserved.
But of course, over time, pieces of the ship need
to be replaced. You know, maybe the sails rip and tear,
so you need to put new sails on the ship.
Maybe rot sets into part of the deck, so you
have to rip that out and replace it with new
planking and so on. And eventually, over the course of time,

(08:50):
maybe it's decades, maybe it's even centuries, you gradually replace
every single piece of the ship, so you ultimately arrive
at a point where no element in the ship of
Theseus is the original component. Theseus himself never touched anything
on the ship at this point, So would you still

(09:11):
call it the same ship? If not, when did it
officially stop being the Ship of Theseus? Because obviously, if
you took possession of the ship right after Theseus got
tossed off a cliff by Lycomedes, and then
on your first day you had to replace a sail,
you would still call it the ship of Theseus? Right,

(09:32):
it's just one sail that you replaced. The ship itself
is still the same, and you know a single sail
does not change the identity of the overall ship. But
is there a point where that does happen where the
ship's identity changes? In tech, one way this thought experiment
can manifest is when a company undergoes digitalization. Companies that

(09:54):
have been around for decades have various systems and processes
in place that predate digitalization. And I always struggle
over that word, so you're gonna hear me stumble a lot.
But as a result, to modernize, the leaders of
these companies have to decide when, if ever, to convert
old processes into new ones in order to stay current

(10:15):
and to avoid problems with outdated legacy components. Whenever I
look at really big companies that have been around for
like a century, I am often left wondering how they
handled these transitions, or even if they tried to, because
these legacy systems are often crucial to the business. The
business grew around these systems, and so changing the system

(10:38):
is hard because you have so much other stuff that
grew up around it and depends upon it, and the
hardware becomes outdated. It can even get difficult or sometimes
impossible to maintain or replace old equipment simply because no
one makes that thing anymore. You know that particular computer

(10:58):
system may not even be available at any rate, so
you have to figure out a different way to do things.
Digitalization makes it easier to track progress and identify bottlenecks
and such, but it also might mean having to take
a slightly different approach to try and get a similar
result as your legacy systems. So the new version isn't

(11:21):
perfectly recreating the old one. It's just trying to reach
the same conclusion. But in the process new things might
pop up, unexpected complications or diversions, and thus you might
be left wondering if the IBM of today is really
the same company as the IBM that was founded in
nineteen eleven. Actually nineteen eleven was the founding of the

(11:43):
Computing Tabulating Recording Company, which was the precursor to IBM,
But you get my point. There are some thought experiments
that are specific to computing problems. One of those is
called the Two Generals problem, which focuses on the issues
you face when you try to establish communications across unreliable connections.

(12:05):
So with networks this becomes a big deal, right? The
basis of Internet connections largely falls to figuring out the
most reliable way to deliver information to another system that's
connected to that network. But the basic Two Generals problem
goes something like this. You have an enemy that's in
control of a central valley, and you have two generals.

(12:26):
Each of them are in charge of their own army.
Each army is in a valley that neighbors this central valley,
So essentially you're flanking the enemy. You've got one army
on the left, one army on the right. They're both
in their own valleys, and the enemy is in the
valley in between the two so your goal is to
establish a time for both generals to attack the enemy

(12:47):
in the middle at the same time, because the enemy
is too well entrenched and too strong for either army
to defeat it on its own. If only one of
your armies attacks, it's going to get wiped out. So
really the only hope for your victory is to have
a coordinated attack. Now complicating things is that the only
way the two generals are able to communicate with one

(13:07):
another is to send a messenger across enemy lands to
reach the other general, and any messenger runs the risk
of being caught in the process. So let's say that
you determine, before they set out, that General A is
in charge of establishing an attack time, and so General
A writes, we attack at dawn in three days and

(13:28):
sends a messenger out to travel to General B. Well,
General A doesn't know if the messenger makes it to
General B. So in three days at dawn, there's a
risk the General B never got the message, and if
General A attacks, it's going to mean a loss because
B won't be participating in a simultaneous attack. But what

(13:49):
if General B did get the message and then sends
a confirmation back message received, we attack in three days
at dawn. Well, that messenger might end up being intercepted.
So now in three days time, General B isn't sure
if General A knows that everything is good to go
as scheduled. So maybe General B hesitates to avoid defeat

(14:11):
because General B doesn't know if General A is aware of this.
Of course, General A, on receiving the reply, could try
to send their own message back to General B. But maybe
that messenger gets caught along the way, and this goes
back and forth, and the challenge is you cannot be confident
that any one message made it to the correct destination,
So how you design a communication system where you're reasonably

(14:34):
certain that messages are going through becomes a challenge. The
thought experiment shows that uncertainty in communication systems isn't a
problem that can necessarily be outright solved, but it perhaps
can be mitigated to a point where all sides are
comfortably communicating.
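As a rough illustration of why that uncertainty never fully goes away, here is a toy Python simulation; the capture probability, the round count, and the message text are all made up, and this is not how any real networking protocol handles the issue.

```python
import random

# A toy simulation of the two generals' predicament, with a made-up chance that
# any single messenger gets captured crossing enemy territory.
random.seed(7)
CAPTURE_PROBABILITY = 0.3

def messenger_arrives():
    return random.random() > CAPTURE_PROBABILITY

def exchange_confirmations(rounds=6):
    # Generals A and B take turns sending confirmations of the attack plan.
    for round_number in range(1, rounds + 1):
        sender = "A" if round_number % 2 else "B"
        if not messenger_arrives():
            print(f"Round {round_number}: General {sender}'s messenger was captured; "
                  f"the other general is left guessing.")
            return
        print(f"Round {round_number}: General {sender}'s confirmation got through.")
    # Even in the lucky case where every messenger survives, whoever sent the
    # last confirmation has no acknowledgment for it, so certainty never arrives.
    print("Every messenger made it, but the final confirmation is still unacknowledged.")

exchange_confirmations()
```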
Okay, we've got more to say about thought experiments and philosophy, but let's take a quick break. All right,

(15:06):
we're back. Let's talk a little bit more about computing
thought experiments. We just mentioned that it's hard to be
certain about communications in uncertain situations. You know, you
can do the best you can and limit the chances
for messaging to fall through, but you can't ensure that

(15:29):
it is perfect. That's the purpose of the two generals
thought experiment. But let's talk about a different one. Let's
talk about philosophers and spaghetti. Seriously, there is a thought
experiment called the dining philosophers problem. This one is more
about the sharing of computational resources and avoiding a deadlock situation,

(15:51):
or a situation in which one computational process is hogging
all the resources and all the other processes that need
to run on that same machine can't. So to understand this,
let's first recall that back in the old days, before
we got to microcomputers and minicomputers, a computer system
generally consisted of a big, centralized mainframe computer that you

(16:16):
would access through a data terminal, a dumb terminal, which
could include very basic input devices like a keyboard and
a very basic output device like a monitor. But the
dumb terminal wouldn't have any computational ability itself, like it
would look kind of like a desktop computer, but instead

(16:36):
it's literally just a monitor and a keyboard. It's connecting
to this centralized computer, which likely has lots of other
dumb terminals connected to it, with other people also accessing
the centralized computer. So what you really have is a
shared computational resource that is distributed across all these different
dumb terminals. Computers typically handled all this by dealing with

(17:01):
each terminal one at a time, in sequence, but at
really fast speeds, so it felt that it was pretty
responsive and that you were doing everything more or less
in real time, and it was called time sharing. Every
person at a data terminal was sharing time with this computer.
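As a loose sketch of that idea, here is a tiny round-robin loop in Python; the terminal names and workloads are invented, and real time-sharing systems were of course far more elaborate.

```python
from collections import deque

# A toy round-robin scheduler: the "mainframe" gives each terminal one slice of
# work per turn, cycling fast enough that each user feels like they have the
# whole machine. Terminal names and workloads here are made up.
terminals = deque([
    ("terminal-1", 3),   # (name, units of work still needed)
    ("terminal-2", 1),
    ("terminal-3", 2),
])
TIME_SLICE = 1

while terminals:
    name, remaining = terminals.popleft()
    remaining -= TIME_SLICE
    print(f"{name}: served for one slice, {remaining} unit(s) of work left")
    if remaining > 0:
        terminals.append((name, remaining))   # back of the line for another turn
```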
Time was really precious with these things too. But how

(17:24):
do you make sure that all the different processes slash
terminals are able to access the computer fairly? How do
you avoid situations where all the demands are coming in
at a point that effectively locks the entire system where
it can't do anything. This brings us to the dining
philosopher's problem. So imagine we've got ourselves a big round

(17:46):
table and we have placed five plates around this table.
There's a chair at each plate, and in between the
plates there is a single fork, so you have plate
for plate, fork plate fork plate, etc. So five plates,
five forks, five chairs. So far, so good, But the

(18:10):
problem is that the philosophers who are coming to dine
are there to eat spaghetti. And the big old heaping
plate of spaghetti is glorious, but the only way
to eat it is to use two forks simultaneously. So
you need a fork in each hand in order to
be able to wind up enough spaghetti to shove into

(18:33):
your gob and you can eat your spaghetti. When you're
not eating, you can think, because you're a philosopher, so
you're either thinking or you're eating. That's all you're doing
at this table. But obviously if you go without eating
for too long, you'll starve yourself to death. Now, since
you need two forks to eat, and there are only
five forks at the table, if you grab the fork

(18:56):
on your left and the fork on your right, it
means that the people sitting to your left and to
your right they can't eat right they have access at
most to one fork. They don't have access to the
second one, because those are the ones that are in
your hands. Once you put the forks down, they become available,
and then the people on either side can pick that
fork up and potentially eat unless of course, the fork

(19:18):
on their opposite side has already been taken by the
other two people at the table, so they could be
out of luck, and you have to figure out how
to juggle this. Worse than that, though, Let's say that
you've set a rule that whenever a fork is laid down,
you pick it up immediately, so that you're always at
least half ready to start eating. But everyone follows this rule,

(19:41):
and at the very beginning of the meal, everybody reaches
over to their right and picks up a fork. Well,
now all five forks are in hand, five different hands
in fact, which means no one can eat or think
because they only have one fork in their hand. They
need two forks to eat. If they set down the fork,
then they're going to lose it, so they're holding onto it.

(20:04):
It deadlocks the whole system. So how do you fix this? Well,
there are actually different solutions to this problem, and they're
all meant to try and avoid deadlock. And then there
are other solutions that are meant to ensure fairness, because
that's not a guarantee in this system. For example, you
might set a rule that ends up assigning a number
to each one of the forks and maybe the rule

(20:26):
is that you can only pick up the lower-numbered fork
next to you first, and only after that can you grab
the higher-numbered one. So everyone grabs the lowest-numbered fork
within their reach. But one philosopher is going to be left

(20:47):
unable to do that, because the lower-numbered fork next to
them has already been grabbed by a neighbor, and the only
fork left to them is the highest-numbered one, number five,
which the rule says they can't start with. That means fork
number five stays free, and the philosopher who already holds
fork number four can grab number five as their second fork,
and they can eat. Then they can set down their forks,
and this can then continue with everybody getting a chance,
assuming you have other rules in place

(21:08):
to help guide things.
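Here is a minimal Python sketch of both situations, the everybody-grabs-their-right-fork deadlock and the numbered-fork rule; the forks are numbered from zero rather than one, only a single round of grabbing is played out, and there are no real threads, so treat it as an illustration of the logic rather than a working scheduler.

```python
# Five philosophers sit in a circle; fork i is on philosopher i's right and
# fork (i - 1) % 5 is on their left. This sketch plays out a single round of
# fork-grabbing under two different rules; no actual threads are involved.
NUM = 5

def one_round(pickup_order):
    """pickup_order[i] lists the two forks philosopher i tries to grab, in order."""
    holder = {}   # fork number -> philosopher currently holding it
    for philosopher in range(NUM):
        first_fork = pickup_order[philosopher][0]
        if first_fork not in holder:
            holder[first_fork] = philosopher        # grab the first fork if it's free
    eaters = []
    for philosopher in range(NUM):
        first_fork, second_fork = pickup_order[philosopher]
        if holder.get(first_fork) == philosopher and second_fork not in holder:
            holder[second_fork] = philosopher       # got both forks, so this one eats
            eaters.append(philosopher)
    return eaters

# Rule 1: everyone grabs the fork on their right first -> circular wait, nobody eats.
right_first = [[i, (i - 1) % NUM] for i in range(NUM)]
# Rule 2: everyone grabs the lower-numbered adjacent fork first -> the cycle breaks.
lowest_first = [sorted((i, (i - 1) % NUM)) for i in range(NUM)]

print("Right fork first:", one_round(right_first) or "nobody can eat (deadlock)")
print("Lowest-numbered fork first:", one_round(lowest_first), "can eat; the deadlock is broken")
```

Under the first rule nobody ever gets a second fork; under the second, at least one philosopher does, which is exactly what breaks the circular wait.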
Now that's just one approach, mind you. There are lots of others, like there's one where there's
an arbiter who is there to determine when each person
is allowed to eat. They essentially are the ones given
permission to grant the privilege of eating to specific people
and to make sure that no one overeats. That's

(21:29):
another approach. So the point of the whole thought experiment
is to get people considering the challenges using a limited
number of resources for multiple entities in such a way
that no one goes without for too long, and there's
a means of managing things. It's meant to give computer
scientists heads up on things they have to consider when
they're designing their systems. So that's where thought experiments are

(21:53):
really valuable. It's before you've started
to build anything, right? You haven't dedicated assets and time
and effort to building something. You're thinking it through first
and saying, how do I avoid this perceived problem so
that we don't actually encounter it in the wild and
then have to figure out a solution. How can I
solve it just by thinking about it? That is what

(22:17):
these thought experiments give you the opportunity to do, assuming
that the thought experiment is constructed properly, which is not
always the case. There are thought experiments that later on
people picked apart and said, this thought experiment is predicated
upon assumptions that we can't be certain are true, and

(22:38):
therefore you can't really use this thought experiment without acknowledging
that it could just all be for nothing because the
actual premises aren't proven. But then there are the various
thought experiments and ethics problems that come into play when
you start to talk about artificial intelligence. Now, I've said

(22:58):
many times in this show, AI covers a huge amount
of ground. It's a very I think AI is a
dangerous term. Not dangerous in the sense of it potentially
being harmful to humans, but rather it's such a huge
discipline that it's very easy to be reductive when you're
talking about AI, and to think that when you say AI,

(23:22):
you're just talking about machines thinking as if they were people.
That's one version of what AI could be. It's generally
referred to as strong AI. But AI covers a lot
of ground. It is a multidisciplinary technology, and it encompasses
relatively constrained concepts like computer vision or language recognition, and

(23:47):
then it ranges all the way up to big ideas
like strong or general AI capable of processing information in
a way that is at least human-like. Well,
one of the elements, in fact, one that's closest to
strong AI, that's in the thought experiment world, is the
thought experiment of an artificial brain or artificial mind. What

(24:09):
would it take to produce an artificial brain? So something
that we have created that is capable of some form
of thought, something that we would recognize as thought. So
there's a question about whether or not it's even possible
to create an actual artificial brain and what that could entail.

(24:30):
Some argue that what it will take is just a
sufficiently complex computer system that's emulating how our brains work,
so like an artificial neural network. If we were able
to build an artificial neural network that was big enough
and fast enough on powerful enough computer systems, then potentially

(24:51):
we would see the formation of an artificial brain. That's
how that argument goes. And maybe we wouldn't even need
to do an actual artificial neural network. Maybe the collective
interconnections of the Internet could allow an intelligence to emerge,

(25:13):
you know, maybe it would even be transitory in nature.
Maybe it would be an intelligence that emerges and fades away,
and maybe so quickly that we can't even ever recognize it,
that it's elements of an intelligence that because it's so transitional,
we don't recognize it as such. And maybe it is

(25:36):
possible to create a brain or mind out of such
complex connections between high end computer systems. But the truth
of the matter is we don't have a full understanding
of how our minds work, the actual gray matter that's
in our heads. We don't have a full understanding of that. So,
because we don't fully understand how our brains work, there

(25:57):
are some who argue that you know that it's possible
there's some element in our minds that we have yet
to identify that will be necessary for us to understand
if we are ever to realize a true artificial brain,
and that without this unknown but perhaps fundamental component, it just
won't happen. Maybe we would fall upon it by accident,

(26:21):
or maybe we'll hit a limit that we just can't
get around without first having a deeper understanding of how
our own brains work. Alternatively, it might be possible to
create an artificial brain or mind without attempting to simulate
or replicate how human minds work. Proponents of this argument
point out that for much simpler tasks, relatively speaking,

(26:46):
like human flight, we ultimately abandoned technologies that were attempting
to replicate how birds fly. I'm sure you've seen old
film footage of experiments in heavier than air flight where
people had strapped wings to their arms and they were
flapping them up and down, or they had some mechanical
contraption that was moving wings up and down and it

(27:09):
was all an attempt to replicate how birds fly in
the air, but these didn't really work, and ultimately we
found that going with a fixed wing aircraft design and
abandoning our foolish attempts to replicate what birds are doing,
we could actually succeed. We ended up creating successful flying
machines even though we were not directly mimicking birds and nature.

(27:34):
So by that argument, you could say, well, maybe creating
an artificial brain won't involve mimicking our own neurological systems
at all. Maybe it'll be through some other means, such
as those complex connections on the Internet, for example, where
intelligence would be an emergent property. So that is another

(27:56):
approach toward looking at an artificial brain. As for
what I believe, I think that with enough complexity and power,
maybe we could see something like an artificial brain emerge,
But I honestly don't know. I do think that the brain,
the mind is completely engulfed and encompassed by the gray

(28:21):
matter in our heads. I don't think there's anything metaphysical
that's going on there. That's my own personal belief, but
I don't know that for sure. It's just my belief
partially backed up by the fact that people who have
encountered some form of brain injury often have very different
experiences from that point forward. And to me, that means

(28:42):
that consciousness and experience are very tightly locked with the
actual organ of the brain. But that doesn't mean that,
you know, there's not something else going on that I'm
missing that would be necessary. I just don't know. All Right,
we're going to take another break. When we come back.
We've got a few more thought experiments we need to

(29:03):
talk about, including some golden oldies. Okay, let's talk about
the Turing test, because you could argue that this is
kind of a thought experiment. So Alan Turing based this
off a game called the imitation game, and the idea

(29:26):
behind this is that you have a contestant who gets
to communicate with someone without being able to see or
hear this person. So maybe they're typing things out on
a typewriter. They submit it and then they get a
typed response, and their goal is to try and figure
out with whom they are communicating. So one version of

(29:48):
the imitation game has them talking to someone who could
be a man or could be a woman, and if
it is a woman, the woman is posing as if
she were a man. So it's the contestant's job to
suss out whether or not the person on the other
end of the communication chain is actually a man or
a woman pretending to be a man. So let's ignore

(30:10):
the dated concept of a binary approach to gender; you know,
obviously that's definitely changed since then. Turing was suggesting that
you could play this same sort of game, but instead
of having a woman pose as a man, you could
have a machine posing as a human. The contestant would

(30:31):
have to decide whether or not they were interviewing a
person or a machine pretending to be a person. And
if the machines were reliably able to fool contestants into
thinking that they are chatting with another human being, then
the machine would be said to pass the Turing test. Now,
Turing wasn't actually saying that such a machine, essentially a chatbot,

(30:52):
is intelligent or was capable of thought. Instead, he was
saying the machine could simulate intelligence to a degree that
a person might not be able to tell the difference.
And after all, each individual doesn't know for sure that
the people they encounter possess intelligence. If you met me
and we had a conversation you wouldn't be sure that

(31:12):
I am actually intelligent. You would know you're intelligent because
you know your own experience, right, You've had your experiences,
You know you possess intelligence. When you talk to someone else,
you assume they also possess intelligence. But you can't know
for sure because you cannot occupy their experience. But we
grant the assumption that the people we encounter have intelligence.

(31:36):
Turing was kind of cheekily suggesting that perhaps we should
extend the same courtesy to machines that appear to possess
the same qualities. Whether or not the machine is actually
intelligent or is capable of thought is kind of moot.
If the outcome seems to mimic intelligence, why shouldn't we
just go ahead and say the machine is intelligent. Does
it really matter if the machine can actually think or not? Now,

(32:00):
philosopher John Searle said, heck, yeah, it matters if we
say computers think and they don't. And in fact, he
went so far as to say computers are not capable
of having a mind to make up because ultimately they
are just machines designed to follow instructions. Thus, you know
a machine that follows a program, the program could be

(32:24):
incredibly sophisticated and complicated, but it's still ultimately just a
list of instructions that the computer has to follow. The
computer can't divert away from those instructions. It might appear to,
but it can't go off book, you know, it can't
go off the script and start to improvise. And at

(32:44):
no point does this become something as human as a
mind is. So to illustrate his perspective, he proposed a
thought experiment called the Chinese Room. Now I've talked about
the Chinese Room and a lot of other episodes, so
I'll try to keep this kind of short. Searle argues
that a computer running a program is a bit like

(33:06):
taking a non Chinese speaking person, someone who does not
understand Chinese. They can't speak it or read it. You
put this person into a room that just has a
door with like a mail slot in it, and inside
the room is a desk, there's paper, there's a pen,
there's plenty of ink, and there's a giant book of instructions.
So once in a while, someone shoves a piece of

(33:28):
paper through the little mail slot in the door, and
the piece of paper has a Chinese symbol written on it.
The person in the room has a job. They take
that piece of paper with a Chinese symbol written on it.
They go through their big book of instructions looking for
that symbol, and they ultimately will find it, and then
they will produce a response based on what's in the book.

(33:49):
They'll have to draw a different Chinese symbol. They just
ape the instruction that's in the book. Then they put
that through the mail slot in the door and they're done.
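At its core, the book of instructions is just a lookup table, something like this toy Python sketch; the entries are invented, and the translations appear only in comments for the reader's benefit, since the person in the room, by design, understands none of it.

```python
# The whole "book of instructions" reduced to a lookup table. The entries are
# made up for illustration; the translations in the comments are for the reader
# only, since the person in the room understands none of them.
rule_book = {
    "你好吗？": "我很好。",   # "How are you?" -> "I am well."
    "你是谁？": "我是人。",   # "Who are you?" -> "I am a person."
}

def person_in_the_room(slip_of_paper):
    # Match the incoming symbols and copy out whatever the book dictates;
    # no comprehension is involved at any point.
    return rule_book.get(slip_of_paper, "？")

print(person_in_the_room("你好吗？"))   # prints 我很好。 without anyone understanding it
```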
On the other side of the door, you have someone
who has brought a question to the room. You know,
it's written in Chinese symbols. So they submit a
question and then after a bit of time, they get

(34:11):
an answer, and to them it appears that whoever is
behind the door understands Chinese symbols and can respond in kind.
But the fact is the person in the room doesn't
understand Chinese. They're just following very thorough instructions. But they
have no idea what's being asked or even what the

(34:32):
response means. They don't know what they're saying. They're just
copying what's in the book. They're following the program. They're
matching questions with answers in a language they don't understand.
They don't even necessarily know that they're questions. They're just
submitting whatever the corresponding response should be. So Searle says

(34:52):
that machines are essentially doing this, that's what they're doing.
They're producing responses based on input, but they have no
understanding of either the input or the response. They're
just following instructions. When you engage in a conversation with
ChatGPT, ChatGPT doesn't actually understand what you're talking about.
It doesn't comprehend the questions, It doesn't understand context or

(35:17):
anything like that. It just builds up responses based on
a really sophisticated program. But these responses, even if they
correctly answer your question, do not show that ChatGPT
actually understands what is going on. It's just producing a result.
Searle says this is because the machine ultimately cannot think,

(35:38):
It cannot be said to have a mind, and he
further argues that strong AI is a dead end. We're
never going to get there. It's inherently impossible,
and there's actually a lot of discussion and debate around
the Chinese Room thought experiment. There are people on different
sides of the matter, arguing for or against its merits,

(35:59):
and the interpretation of it. But again, this gets into
a lot of details that we don't really have time
to dive into for this episode. And I have done
episodes on the Chinese Room thought experiment in the past,
so let's move on for a different approach to AI.
Let us turn to Valentino Braitenberg, who was a neuroscientist

(36:20):
and an important figure in the field of cybernetics. And
I feel like I need to define cybernetics because I
had a complete misunderstanding of what that term meant until
I was doing research. Cybernetics is a discipline concerned with
communications and automatic control systems in both machines and living things,

(36:41):
as defined by Oxford Languages. The word has its origins
in the Greek word kybernetikos, which means good at steering.
I didn't know that before I researched this episode. So
you could describe the act of a human picking up
a teacup from a saucer on the table as a

(37:02):
cybernetic series of actions. And you would first think of
like the human brain as a controller and it receives
information from a sensor the human's eyes, and this gives
information about the teacup, where the teacup is located, its
distance from the human in question, the teacup's orientation with

(37:24):
reference to the humans position, etc. This information is called feedback,
So the feedback goes to the controller, and then the
controller uses the feedback to make a decision in order
to achieve a desired outcome, in this case picking up
the teacup. But this actually happens in stages, right?
As an outward observer, you might see this human

(37:48):
lean forward and reach out their arm and then open
their hand, and then take the teacup and then lift it.
So this is actually a series of steps. The goal
for the controller is to take the behavior that we're
observing as outsiders, the leaning forward and the reaching of
the hand and so forth, and to bring that into

(38:11):
alignment with the desired behavior of just picking up the teacup.
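Stripped down to one dimension, that loop of sensing, comparing, and correcting looks something like the following Python sketch; the positions and the gain value are invented purely for illustration.

```python
# A one-dimensional sketch of the feedback loop: the "sensor" reports how far
# the hand is from the teacup, the "controller" turns that error into a motion,
# and the loop repeats until the error is small. All numbers are made up.
hand_position = 0.0
teacup_position = 10.0
GAIN = 0.5   # how strongly the controller reacts to the remaining error

for step in range(12):
    error = teacup_position - hand_position    # feedback from the sensor (the eyes)
    if abs(error) < 0.1:
        print(f"Step {step}: close enough, close the hand and lift the teacup")
        break
    hand_position += GAIN * error              # the controller's corrective action
    print(f"Step {step}: hand now at {hand_position:.2f}, error was {error:.2f}")
```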
This discipline plays an important part not just in our
understanding of organisms in their behavior, but also how you
could create things like artificial limbs that interface with our
brains and have those artificial limbs behave similarly to an
organic limb. We have seen some really incredibly sophisticated robotic

(38:35):
limbs that can do this sort of thing, but they
have to really be grounded in this study to move
in a way that's natural and actually achieves whatever the
outcome is that the person who is attached to that
limb wants it to do. This is not something that
just automatically happens. You have to build it in. So

(38:56):
in the mid nineteen eighties, Braitenberg published a book called Vehicles.
In this book, he presented hypothetical self operating machines. So
these were not actual machines, they were just sort of
a thought experiment. He said, imagine if you had
a machine that did this, and they would exhibit behaviors
that could become increasingly intricate and complicated and dynamic, But

(39:20):
ultimately you could start to boil down those behaviors as
following simpler rules. And if you just understood all the
different rules in all the different situations, you would be
able to even predict what something would do to some degree. So,
for example, a machine might have an optical sensor and
it can detect if something is in front of the machine.

(39:42):
So imagine you've mounted the sensor to the front of
a little four wheeled vehicle, and if the sensor doesn't
detect anything in front of it, it allows power to
go to the motor that drives those wheels, and the
little robotic car will move forward. But if something gets
in its way, then maybe it cuts power to the

(40:04):
motors and the wheels stop turning, or maybe it changes
the rate at which different wheels turn so that it
can rotate a bit, it can turn out of the way
of whatever the obstacle is. I'm sure you've had experience
with little toys that do this sort of thing where
there's some sort of simple optical sensor so that if
it gets close to a wall, it stops and turns

(40:27):
and moves in a different direction. Heck, your typical robot
vacuum cleaner will do this, right, So this is something
that we've had some experience with at this point. And
Braitenberg actually went on further and hypothesized that you could
have machines that would follow slightly more complicated rules in
such a way that it could imply motivations behind movements,

(40:51):
things that we would normally associate with humans, like fear
or aggression, but really it could just be the machine
responding to different situations in a predetermined way. So let
me give you a simple example. Maybe you've got this
optical sensor that's on the front of this little four
wheeled vehicle, and when it detects something, it tries to
determine whether or not the thing ahead of it is

(41:13):
bigger or smaller than it is. If it's smaller, maybe
it accelerates toward it, as if to intimidate it. And
if it's larger, maybe it turns and accelerates away from
the object, as if it's in fear.
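That fear-or-aggression behavior boils down to a couple of conditionals, something like this Python sketch; the sizes are invented, and it is offered in the spirit of Braitenberg's hypothetical vehicles rather than as any real robot's control code.

```python
# A toy version of the rule just described: compare what the sensor sees against
# the vehicle's own size and pick a motor response. The sizes are invented, and
# any appearance of "aggression" or "fear" is just these few lines of logic.
VEHICLE_SIZE = 1.0

def react(detected_size):
    if detected_size is None:
        return "keep driving forward"            # nothing ahead
    if detected_size < VEHICLE_SIZE:
        return "accelerate toward it"            # reads as aggression
    return "turn and accelerate away"            # reads as fear

for obstacle_size in (None, 0.4, 3.0):
    print(f"Sensor reports {obstacle_size}: {react(obstacle_size)}")
```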
Now, Braitenberg's hypothetical vehicles didn't really need any cognitive processes. They would just
follow these simple instructions. But if you were to put

(41:34):
them in a complex enough environment with enough different sets
of instructions, these behaviors would potentially be very dynamic and complicated,
perhaps complicated enough to imply a deeper intelligence, even though
ultimately they were just following simple rules. Speaking of vehicles,
let's talk about the trolley problem. And you've likely heard

(41:56):
of this one. It's one of the more famous thought
experiments that relates to ethics. The basic version is that
there's a trolley hurtling down some tracks, and the brakes
on the trolley aren't working, and if the trolley keeps going,
it will hit a group of five people, killing all
of them. But you're standing at a switch. If you
throw the switch, the trolley will divert onto a separate

(42:16):
set of rails and strike one person, killing that one,
but the other five people will be saved. So do
you throw the switch dooming one person and saving five?
There's actually a lot of stuff to consider here. For example,
do you consider it more ethical to make an active choice?
Is it akin to murdering someone if you throw the switch?

(42:39):
Are you killing that person? Like you're condemning them to die?
But if you choose not to act, does that exonerate
you from the death of the five people? You could say, well,
they'd be dead if I weren't at the switch, there'd
be no one to change it. They would have died
either way. So the only thing that's different is
that I happened to be at the switch. Does that

(43:00):
make me a bad person for not throwing the switch?
There are variations of this as well that make it
even more complicated. For example, one early version had the
person at the switch having to choose between saving the
five people or condemning their own child to death. Other
versions replaced the trolley. What if it's an incoming missile

(43:21):
and you have the ability to divert a missile that
was heading toward a city, But if you divert it,
that missile is going to hit a small town instead.
So if the missile hit the city, more people would die,
but there could still be some survivors in the city.
If it hits the town, it's going to essentially wipe
out the entire town population. Fewer people overall will die

(43:42):
because the town is smaller than the city, but essentially
everyone in the town dies; in the city, it's a
massive but not entire part of the population. Well, what
does all this have to do with technology? These are
actually the sort of questions that engineers have to wrestle
with as they design stuff like autonomous systems. When we
look at the possibility of driverless vehicles, for example, we

(44:04):
have to consider how the vehicle will handle emergency situations.
So let's say a driverless car with passengers inside it
is motoring down the road and a person steps out
in the road suddenly, so it's too late for the
vehicle to brake, let's say. Does the driverless
vehicle instead veer out of the way, perhaps even going off road,

(44:26):
factoring in the fact that the passengers inside the car
have seat belts and have air bags and other protective
measures around them, and thus prioritize the pedestrian's health? Or
does it instead prioritize the safety of the passengers and
make a decision that puts the pedestrian's safety at
considerable risk? Machines do not intrinsically know this stuff.
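To show what building that in might look like at the most skeletal level, here is a deliberately oversimplified Python sketch; the maneuvers and risk numbers are invented, and no real driverless system reduces the problem to a few lines like this.

```python
# A deliberately oversimplified sketch of encoding the tradeoff explicitly.
# Maneuvers and risk estimates are invented; this reflects no real vendor's policy.
def choose_maneuver(options):
    """Pick the maneuver with the lowest combined expected harm."""
    return min(options, key=lambda option: option["risk_to_pedestrian"] + option["risk_to_passengers"])

options = [
    {"name": "brake hard and stay in lane", "risk_to_pedestrian": 0.8, "risk_to_passengers": 0.1},
    {"name": "swerve off the road",         "risk_to_pedestrian": 0.1, "risk_to_passengers": 0.3},
]

print("Chosen maneuver:", choose_maneuver(options)["name"])
```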

(44:50):
So, grim as it may seem, these are things that engineers
have to take into consideration as they build out complex
autonomous systems. Now, let me just finish up by touching
on a classic science fiction thought experiment. Are we living
in a simulation? You've seen this idea explored in movies
like the Matrix series, And there is an interesting thought

(45:12):
experiment proposed by Nick Bostrom regarding simulated realities, and he
posits that at least one of several possibilities must be true,
namely scenario one. Humans are never going to reach a
point in which they can construct a simulated reality sophisticated
enough for the inhabitants of that reality to believe they
are quote unquote real. So, in other words, for whatever reason,

(45:35):
maybe we destroy ourselves before we get there. Maybe we
just never develop technology sufficient enough to do it. But
we aren't able to create a computer simulation so robust
that the simulated beings inside of it have their own
kind of self awareness. Two that there are no other
civilizations out in the universe that are able to do
this for whatever reason. Three that we humans will one

(45:58):
day be able to do this, but we can't do
it yet. However, we will one day be able to
do it, we just haven't reached that point, and we're
the first to do it, like no one else has
managed to do it. Four that we're actually living in
a simulation. The idea being that if it is
possible to build such a simulation where the beings inside

(46:19):
the simulation have self awareness and can think and have
emotions and all this sort of stuff that we associate
with being humans and having experiences, then if that is
possible to build such a thing, we are definitely in one,
or at least there's a fifty-fifty shot we are,
because if it is possible, it would be pretty egotistical

(46:42):
to suggest that we'd be the first to do it,
that it hasn't happened already, and that we are not in
fact a product of such a thing. There are a
lot of arguments that go into that. It's kind of fun.
I would argue, ultimately it's moot because it's not a
falsifiable hypothesis. And ultimately we still have our own experiences

(47:03):
in our own lives. So even if it is a simulation,
it matters to us as we're in it, like just
as much as it would matter if it's not a simulation.
So I say, simulation or not. Go out there, be
good people, use critical thinking, use compassion, and use these
thought experiments to kind of guide you a little bit
and kind of suss out what's right and what's wrong

(47:25):
and what are some possible solutions to these problems. Like
I said, this is just a small collection. There are tons more.
I'll probably do more episodes in the future about different ones.
There's a whole bunch about quantum mechanics. But boy, howdy
do those get heavy. So maybe we'll take another look
at these in a future episode. For now, I hope
you're all well, and I will talk to you again

(47:47):
really soon. TechStuff is an iHeartRadio production. For more
podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts,
or wherever you listen to your favorite shows.
