
October 7, 2016 38 mins

Joe and Lauren share their favorite predictions about the future while Jonathan makes lots of jokes. What predictions rank among the team's favorites?



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Brought to you by Toyota. Let's go places. Welcome to Forward Thinking. Hey there, welcome to Forward Thinking, the podcast that looks at the future and says: tomorrow, Tomorrow, I love you, Tomorrow. I'm Jonathan Strickland. I'm Lauren. And I'm Joe McCormick.

(00:24):
And this is gonna be part two of a two-part episode that we had to split up, because we ended up talking for a long time. But last time, what were we talking about? It was about our favorite predictions of the future. Yeah, we talked about a lot of very quaint French postcards from the turn of the twentieth century, men with bat wings, and we talked

(00:46):
about some driverless car technology, and then we talked about how communications technology doesn't make you less of a jerk? Is that what we talked about? Yeah, well, we talked about telecommunications predictions, the idea of the rhetoric of the electrical sublime: the people who thought that, you know, the telegraph and eventually the internet would just bring us all together and make us connect and just be friendly.

(01:09):
We would be friendly, happy people, if only we could talk to each other instantaneously. We'll bring about world peace, and then we'll all just click on that little button to buy the world a Coke. Yeah, and well, you guys weren't alive in the seventies, so you have no idea what I'm talking about. But we all know what "Buy the World a Coke" is. Did you ever see those commercials? Were they still around when you guys were kids? Also, I don't know what you're talking about. You'd like to... I'd

(01:31):
like to sing: I'd like to teach the world to sing in perfect harmony. I'd like to buy the world a Coke and keep it company. I know, the polar bears, the polar bears, you know, cute little ones. Joe, just talk about robots. Okay. So last time, we reached... we each picked a couple of favorite predictions of the future, or at least the most illuminating.

(01:52):
I don't know if I could say I have favorites, more the ones that spur good conversation. And so we did a whole list: telecommunications and all that. Now I want to talk about robots. So I was trying to think what my other favorite prediction would be. And I think we've talked about this on the show before, but I have to come back to it

(02:13):
because I think it's so fruitful. And it's the science fiction world embodied in Isaac Asimov's robot stories. Essentially, in Asimov's fictional future, robots are very intelligent, powerful, well integrated into society, performing all kinds of labor thanks to the

(02:34):
positronic brain. And to keep their behavior in check, they all necessarily are bound by three fundamental laws of robotics. Right, that's kind of like... I mean, we've seen tons of different science fiction stories that build upon the same idea, even things like RoboCop, where he has all the different directives. That stems from this concept of the laws

(02:57):
of robotics. Yeah, and so the three laws basically are: First Law, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Obviously, RoboCop did not have that one. Nope. The second one: a robot must obey the orders given it by human beings, except where

(03:18):
such orders would conflict with the First Law. So you couldn't tell a robot, hey, I want you to go and punch Jim in the face, right? A robot has to do what you say, unless you tell it to hurt somebody. And then the Third Law: a robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws. And then there was the Zeroth Law that got added afterwards,

(03:41):
which is: a robot may not harm humanity or, through inaction, allow humanity to come to harm. Right, sort of the worldwide extension. And they also would sometimes extend it, not just to humans but to property as well. So, in other words, saying that a robot couldn't cause damage to property unless, again,

(04:02):
it violated a more important law further up on the list. Yeah, and so if you haven't read any of these stories, I do recommend going back and reading some of Asimov's robot stories, because I think they're very entertaining, they're interesting, and they're usually pretty short and self-contained. But anyway, the dramatic conflict in many of these stories comes from engineers and robopsychologists trying to solve problems

(04:28):
with robots, and the problems are created by the fact that the robots are following the laws, but following them in a way that leads to unforeseen problems. So robots with these types of programming can be caught in sort of ethical loops or traps that prevent them from doing something crucial, or that allow them to do horrible

(04:50):
things by way of misunderstanding or misapplication of the laws.
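
(A minimal sketch in Python of that strict priority ordering; the flag names like harms_human are hypothetical estimates a robot would have to make, not anything from Asimov's text or from the episode:)

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # First Law: injures a human, or lets one come to harm
    disobeys_order: bool = False  # Second Law: conflicts with an order from a human
    destroys_self: bool = False   # Third Law: endangers the robot's own existence

def choose(actions: list[Action]) -> Action:
    """Pick the action that best satisfies the laws in strict priority order.
    Tuples of booleans compare element by element, so one First Law violation
    outweighs any combination of lower-law violations."""
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.destroys_self))

best = choose([
    Action("punch Jim", harms_human=True),            # vetoed by the First Law
    Action("refuse the order", disobeys_order=True),  # only breaks the Second Law
])
print(best.name)  # refuse the order
```

(The dramatic conflict in the stories lives in those flags: the robot has to estimate them, and a misjudged estimate is exactly the misunderstanding or misapplication being described here.)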
So it's kind of similar, and I think I've made this comparison in a previous episode of Forward Thinking, but for those of you out there who have ever played Dungeons and Dragons: if you've ever encountered a scenario where your character is granted a wish, you know that any experienced player will spend ages agonizing over the

(05:14):
exact wording of the wish, because dungeon masters the world over take great joy in taking a wish and then purposefully misinterpreting the wish so that a terrible thing happens. So the classic one is "make me a sandwich," and then, boom, the character turns into a sandwich. That would be a very simple version of that.

(05:36):
But you typically will get someone who has to try and word out an incredibly convoluted wish in order to avoid any potential misinterpretation or abuse of their request. The same sort of thing is kind of underlying some of these stories, where it's not that a robot is intentionally trying to find its way around its programming.

(05:56):
It's simply that, in order for it to carry out whatever task it's been given and follow these rules, something doesn't go as people would expect it to. It's really a cautionary tale about human error and hubris. Yes, definitely. And also just the idea of, you know, any sort of artificial intelligence that's sufficiently capable of being able

(06:20):
to be autonomous or even semi-autonomous, it's hard. Well, yeah, that's exactly why I cited this, because of the increasing interest in the future challenges presented by AI and sophisticated mobile robotics. But you're exactly right. You know, the more a machine is in line with

(06:41):
what we think of as intelligence, in the same way we've used intelligence to refer to human intelligence, the more their behavior and reasoning will become proportionally obscure to us. Like, things that act in intelligent ways are sometimes hard to understand, because intelligence is inherently complex. I've got a

(07:02):
very relevant example that happened very recently, which was that you had some people working with a supercomputer over the course of forty-eight hours to create a mathematical proof for a problem that had been proposed back in the eighties. Or at least someone had said, like, I need a proof of whether this particular mathematical problem is

(07:24):
possible or not possible. And the computer did it in two days, and it took two hundred terabytes of storage to store the proof. And I won't get into the proof, that would take forever for me to explain. But the idea being that it was such a long, laborious process, even for a supercomputer working for forty-eight hours essentially, that for people it was pretty much a lost cause, right,

(07:49):
like, you almost have to use a different supercomputer to verify the results, because you couldn't employ humans to go through all the steps. It would just take way too long, and it'd be too easy to lose your place and make a human error, and then realize, oh well, we've got to have yet another team check the results here. And you start

(08:11):
getting to this idea. And this was just to do one mathematical proof, and there's not a major downside to humankind if it gets something wrong. Right, if it says that the proof shows that this particular mathematical question is impossible, and it turns out it is possible, big whoop. But you might have similarly complex internal machinations in a

(08:35):
robot that is designed to open a door for people. But the robot that's designed to open a door for people could slam the door on people and kill them. So, you know... well, maybe a robot that opens doors for people wouldn't have to be all that intelligent. Why'd you give the full general AI to the door opener? You say that, but actually, when I went to South

(08:58):
by Southwest, one of the panels I attended talked about how there was this team that was trying to develop a robot that could open different types of doors, but it would take hours for the robot to figure out what kind of door it was, and it could handle, what, three types of doors? There's your problem, right, right. I mean, it's only got the one door. If you've only got one door, then you're fine. But as we saw also in the DARPA Robotics Challenge, like, opening doors and

(09:20):
walking through them, certainly the walking through them part, right. So, you know, you've got to find a way to prevent robots from causing harm, and this means we've got to think about the AI control problem, and it's not as easy as it sounds. And I think this is very well predicted by Asimov's stories, which makes this one of my favorite future predictions, because he starts with these

(09:42):
three laws that, if you read them, sound very simple and very airtight. It sounds like they cover all the bases. But you only have to think a little bit further, as Asimov did, and start trying to apply them to unexpected malfunctions and use case scenarios. Well, and you could also just imagine where it's not even a malfunction. You could just

(10:03):
imagine where a robot would take no action, because it turns out the robot is able to project all the potential consequences of its actions, and it determines that some of them may in fact cause some form of harm to someone, and that harm isn't even necessarily identified, right. It may just be like, well, I took into account the

(10:24):
task you gave me, I plotted out all the variables, and it turns out there's maybe a twenty-seven percent chance that this could hurt somebody's feelings. And because I interpret that as being harm, I can't actually do what you asked me to do, so I'm just gonna stand here motionless, and you're gonna think I'm broken. The chaos effect of robotics: like, you've created a very expensive brick, and that brick is just constantly thinking, like, well, if I

(10:45):
opened this door, then... Yeah, it's a robot paralyzed by self-doubt, essentially. So we're creating a Douglas Adams character. You would have to create a threshold for the robot, saying you have to be above this percentage sure that this action is going to cause some form of harm before you decide not to do

(11:05):
it, right?
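
(A minimal sketch in Python of that kind of refusal threshold; the twenty-seven percent figure is just borrowed from the joke above, and the names and numbers are illustrative assumptions, not a real design:)

```python
HARM_THRESHOLD = 0.27  # assumed cutoff; picking this number is the hard, human part

def should_act(estimated_harm_probability: float,
               threshold: float = HARM_THRESHOLD) -> bool:
    """Act unless the projected chance of causing harm reaches the threshold."""
    return estimated_harm_probability < threshold

print(should_act(0.27))    # False: possible hurt feelings, so the robot freezes
print(should_act(0.0001))  # True: the risk is low enough, so the robot goes ahead
```

(Set the threshold to zero and you get the very expensive brick; set it too high and the First Law stops doing its job.)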
Now, of course, the application of this to the real world is... well, actually, I was going to say it's all assuming we ever create general AI of any consequence, but actually, I think even if we don't, Asimov's stories are interesting for thinking about ethics as they apply to humans. Well, also, I would argue, it's interesting to apply

(11:27):
that even if you're talking about narrow definitions of AI, there's still something to be said about it. It may not be sentient, it may not be self-aware, it may not have general AI, it may not be walking around and interacting with us, but it could very well still affect us in profound ways. And so thinking about this, even for

(11:49):
something like narrow AI, I think it's important. It is also important when it comes to just our thoughts about how humans behave. I agree, but it's something that a lot of people have been arguing we should be talking about more anyway, with the state of AI as it stands right now, which I think everyone would

(12:10):
say falls easily into the category of weak AI. Yeah. Well, and we talked about autonomous cars in the last episode. When you've got multi-ton death machines with combustion engines in them running around being controlled by weak AI, you have a potential future rife with trolley problems. Yeah. Yeah, that's a very good point. Yeah,

(12:31):
where you get to that situation where an action must be taken and there is no clear action that will prevent harm from happening to someone. Right, it's which someone you choose to cause harm to, right? Would it be the person sitting in the passenger seat, would it be a person outside of the vehicle, would it be a person in another vehicle? Yeah, that whole thing exactly. Yeah,

(12:53):
I mean, and it also comes down to things that are in conflict, like in some of those stories, where you have a very, very low risk of a major problem in conflict with a compelling need for something petty, you know. So, in a driverless car, this might mean: okay, should the car come to a

(13:15):
stop every time there is a point-zero-zero-zero-zero-one percent chance that doing so would prevent injury to someone? I don't know. I mean, what if that means it's coming to a stop all the time and being really inconvenient? So, in other words, let's say that it's a car that has predictive algorithms that can track the motion of various entities

(13:38):
around the vehicle, right? So pedestrians, for example: knowing that sometimes pedestrians will walk into the street not at a crosswalk, so it's not a place where they're designated to cross. Well, autonomous cars will have to be able to deal with that, have to be able to detect that a person has stepped out into the street, and stop. At what point do you tell the car, this is an indicator

(14:02):
that a person is going to walk out into the street, as opposed to, this is someone who's trying to step around some dog doo that's on the sidewalk, and they're not actually gonna walk into the street, but their path has diverted enough so that if they were to continue on it, they would go out into the street, but that's not what they're planning on doing. If the car thinks you're gonna walk out into the street, then it's gonna stop. So in that case, a really, really sensitive

(14:22):
car might kill fewer people in total, but it will take you forever to get anywhere. You might be stopping all the time.
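
(A minimal sketch in Python of that sensitivity trade-off; the probabilities and thresholds are made-up illustrations, not any real vehicle's numbers:)

```python
def should_brake(crossing_probability: float, threshold: float) -> bool:
    """Brake when the predicted chance of a pedestrian entering the road
    clears the threshold."""
    return crossing_probability >= threshold

# Sidestepping dog doo (low), drifting toward the curb (medium), stepping out (high):
predictions = [0.03, 0.20, 0.92]

print([should_brake(p, threshold=0.0000001) for p in predictions])  # [True, True, True]
print([should_brake(p, threshold=0.5) for p in predictions])        # [False, False, True]
```

(The ultra-sensitive setting stops for everything and never gets you anywhere; the relaxed one rides smoother but accepts a little more risk.)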
Another psychological science fiction example: there's a movie called Robot and Frank that I think I've brought up on the podcast before. It's got Frank Langella, and somebody... I think you're right. Yeah, a human of

(14:44):
some kind or another. I'm pretty sure you're right. I've seen that it's on Netflix. I have not watched it. It's cute, it's a fun film. It's comedic and dramatic at the same time. And the basic storyline is that this elderly man, who is a reformed criminal, is being placed in the care of a home

(15:06):
care robot. And the robot has to keep this dude happy; its directive is to keep the dude happy. Should the robot take him on heists? Hijinks ensue. So, you know, like, at what point does morality kick in? Yeah, how do you program a robot to, like, not take dudes on heists?

(15:30):
Assuming that it shouldn't. Maybe it should, right? This also starts falling into the plot of Chappie as well. Oh, that one's terrible. It's not great, but it falls into that same sort of plot: the idea of having a robot that is generally meant to be kind of a positive influence, and then bad people,

(15:55):
or at least people with bad intentions, get hold of the robot and twist its sense of morality, because it doesn't have something innately programmed into it. So it's like, hey, we need to survive, so in order for us to survive, we need to be able to do these things, which are technically crimes, and then they convince the robot to do those things. Chappie is... kind of... a lot of people have compared

(16:18):
it to Short Circuit, the Short Circuit movies. I think that's fair. It's not complimentary, but it's fair. All right. Well, that about does it for me. Lauren, do you have something you want to talk about? Yeah. I also wanted to continue talking about speculative fiction, because that is where I tend to live. And in thinking

(16:41):
about this, I was finding it interesting that a lot of science fiction and speculative fiction, however you want to say it, centers around those cautionary tales about technology that we were talking a little bit about previously. You know, the kind of plot line of, like, mankind has created this amazing thing and it destroys him, possibly

(17:03):
from the inside out. Don't we know it. I'm sure, yeah. Well, I mean, it's fun storytelling. But take Ray Bradbury's work, for example. Excellent work. Ray Bradbury kind of hated technology. His work frequently disdained television and telecommunications in general. In his stories, those kinds

(17:24):
of technologies made people placid and unthinking and isolated and even cruel. And it's not just in stories; he kind of held the same feelings in real life. There's a great quote from him from an interview with The New York Times in two thousand nine, where he was talking to them about the possibility of his work being put into e-book format, and he said... it's great,

(17:48):
all around. He said: Yahoo called me eight weeks ago. They wanted to put a book of mine on Yahoo. You know what I told them? To hell with you, to hell with you, and to hell with the Internet. The father of science fiction, y'all. Or, like, at least one of the crotchety uncles. This is a guy

(18:08):
who... he didn't drive, he didn't fly on airplanes. And okay, like, to be fair, he was born in nineteen twenty, and not everyone finds it easy or even desirable to adapt to new technology. And that is fine. It is not up to everyone to adapt to everything that goes on in the world. But this was a guy who envisioned, like, in-ear

(18:30):
communication headsets, and also ATMs. Yeah, yeah, like earbuds. And also self-driving cars, and also, like, wall-sized flat-screen TVs, and virtual reality. And he did all of this in the nineteen fifties. Yeah. To be fair, though, he thought it was going to be terrible. The future is going to have all this stuff; it's gonna be awful. Yeah. In another interview, he said:

(18:55):
I don't try to describe the future, I try to prevent it. Well, I mean... and in some of the books, you can see what he's saying. Like, in some of his books, you understand exactly what the point is that he's getting at, like Fahrenheit 451. You know, you read that book and you realize, like, this is a book that's warning us about anti-intellectualism.

(19:16):
It's warning us about turning to things like television as a source for information and entertainment, and eschewing the idea of books, which apparently magically make you think, while television magically makes you not think. I think that's an oversimplification, but you see the route he was going. Yeah. Well, and

(19:37):
we still have the concept kicking around today that digital communication makes you colder and more isolated than person-to-person communication does. Although that's a little bit prejudiced, in that it's not taking into account people who are bad at in-person communication, who have, you know, whatever kind of anxiety or inability to get out of the house,

(19:59):
or et cetera, et cetera. And it also ignores the trend of urbanization, where you get to a point where... you weren't growing up in a small... A lot of people these days, I should say most people, are not growing up in a small community where everyone knows everyone else and communication is pretty easy because you already know everybody,

(20:19):
and, I mean, you can't really get around it. And then you get into an urban environment, where you may have potentially millions of people around you. There's no way for you to know everybody, and so the people you know, the people with whom you form relationships, may not be people you encounter all that frequently, unless you're just hanging out with the people you work with or whatever.

(20:40):
So technology has allowed them to maintain these relationships in a way that they couldn't necessarily do without it. There was actually a comic strip, I wish I could remember who it was that drew this, it's just one of those things that pops up on a Facebook timeline and I saw it, but it was commentary on that very idea. It shows four people on, like, a

(21:03):
subway train, and they all have their phones out. And then there's, like, the one version of the thing saying: people today, you know, they want to be on their phones, they don't take the time to bother to communicate with each other, it's terrible. And then there's a flip side of it, where it shows what each person is supposedly typing on their phone, and it's all things like, I'm gonna be home soon; I miss you.

(21:23):
Like, it's actual messages of love and support to people that are meaningful to them. It's not that they don't have any care for the strangers around them, but they are communicating. They're just communicating with people who aren't in that space at that time. So it's kind of judgmental to make that broad statement. You could argue that there are trends that perhaps are changing some of

(21:45):
our cultural values, but to make, like, a flat statement saying, you know, people care less now because they aren't talking to each other in public, I think that's overly simplifying the matter, personally. Joe? Joe thinks so too. I'm thinking. I mean, I get it, where, like, the

(22:09):
idea that... there are people who use technology as a shield from interacting with others. We do it here at work. You put on headphones, and even if you're not listening to anything, it tells everybody: hey. The main move I'm thinking of is, like, you're out somewhere and you see somebody you don't really want to talk to, so you pretend to

(22:30):
be doing something on your phone. Yeah. Yeah. Usually for me, that's when I'm walking home and I just see someone walking the opposite way towards me on the sidewalk, and I'm just like, I don't want to have an interaction with this human being. What can I do? Time to pretend to send a text. I hate saying hello. Hello is fine! Hello is fine. Having to have a five-

(22:51):
minute conversation on why I don't want to buy your John Wayne Blu-rays, that I don't want to have to do again just because I was wearing a cowboy hat. That, by the way, is a true anecdote. Yeah. "I got some Blu-rays of John Wayne." And I'm like, okay. He's like, "Do you want to buy them?" Like, no. He's, "They're collector's items." Like, still no. Like, first

(23:14):
of all, I could go into a very long conversation with you about why physical media no longer holds any sway with me, but that's beside the point; I'm trying to get home. Anyway, I wanted to bring all this up just as an illustrative point of how biased our perceptions of the future can be, based on our current culture and current technological state.

(23:39):
And to kind of springboard off of that: some of my favorite predictions for the future, predictions in big old scare quotes, because they're imaginings, really, since they're speculative fiction, are, like, really grimy, cyberpunk-esque visions, like in the comic book series Transmetropolitan, by Warren Ellis and the artist Darick Robertson, or Snow Crash, which is by Neal Stephenson.

(24:03):
And maybe because I feel like things like that are closer time-wise, and feel more realistic to me, than futurists' kind of sweeping visions about what's going to be going on. In stories like these, commercialism and personal interests are driving technology forward. This translates to booms in wearable and implantable

(24:26):
technology that keeps us constantly connected. The information industry in these stories rules the world, not too far off from where we are right now. Genetics affords the characters upgrades in strength and resilience, can even allow for, like, X-Men-style voluntary mutations. And the lines between humans and their technology are blurred.

(24:48):
Not quite erased, so not quite to a point where you would call it the singularity, certainly not, and probably not even as far as transhumanism, but bumping up against that threshold, somewhere in the squishy meatspace just before it. And all of these changes and developments

(25:09):
result in really amazing things. You can pop an anti-cancer pill and just never get cancer. You can spend your free time in vast and beautiful and enriching virtual worlds. You can learn about anything and everything, that whole, what is it, the rhetoric of the electrical sublime, that sort

(25:31):
of thing. But in these stories, it also leads to terrible things. Viral code can threaten your life and your sanity. Viral code, not viruses? Yes, you can get a virus, but this is a computer virus. It's a brain virus. It's a brain-computer virus. Most bodily upgrades in these stories are considered

(25:54):
tacky or vain at best, and are mostly portrayed as just being kind of grotesque. Personal privacy is a complete joke. The lines between the upper and lower classes are even more stark than they are today, so you've got, like, the haves have way more, and the have-nots have nothing. Sure. And furthermore, the poor in these stories are being told that a virtual life should be enough. Ah.

(26:17):
So this is like the chimney sweep poem. Like, you guys... I realize that your real-world life is crappy, but if you just stick with it, you're gonna have such a wonderful, wonderful existence afterward. That was essentially the message of the chimney sweep poem; it's a poem from the eighteenth century. In that case,

(26:39):
it was specifically criticizing a very particular approach that some Christian ministers were taking when they were talking to the poor, saying: you should just accept your lot in life, because if you do that and you're a good person, you will be rewarded in heaven. And the poet in this case was

(26:59):
arguing: you're saying this in order to preserve the status quo. You're not actually saying it because you believe these people's souls are destined for heaven. You're saying it because it's convenient to keeping these people where they are in their social class. If you believe they're destined for heaven, is that destiny dependent on them doing a good job as being a chimney sweep? Right. So, and furthermore, is that really a happy ending? Like, is that a

(27:21):
nice thing? Well, and here's the problem: I mean, I remember studying this poem in college, and it was a freshman or sophomore poetry class, which meant that... and it's technically "The Chimney Sweeper," it's a William Blake poem. And so it was one of those things where several of the students weren't aware of the

(27:43):
satirical nature of the poem. They were taking it at face value, and that was problematic. Whereas it sounds like this is probably a more apparent satire? Oh yeah, oh yeah. It's not saying that this stuff is rad; it's saying that this is a potential problem of the future. It's not quite a cautionary tale of technology. It's more like a cautionary tale of society, of, like,

(28:04):
where, given these things, society could go. Especially considering that, with these stark divides between the rich and the poor, conflict and violence are still thriving. In opposition to something along the lines of, like, Star Trek, where, you know, certainly there's conflict with other alien races, but within humanity, everything's pretty much sorted out. Everyone's got their

(28:26):
basic needs met. They are allowed to pursue any type of activity they want, whether it's something that would be described as a job, or just, you know, I just want to sit and think. Fine, fine, go ahead. Do you not want to think? Also fine. And there are still, in those sweepingly optimistic stories like Star Trek,

(28:50):
little side tales, side quests if you will, about particular groups of people who, for whatever reason, are being oppressed or mistreated in some way. Because, I don't know why, but maybe because we can't currently imagine a future without people being jerks. Well, and dramatic necessity. Well, I mean, but that's what I'm saying, like,

(29:13):
does dramatic necessity mean that we... like, the fact that a story is boring to us if bad things aren't happening to people, what does that say about us? That you can't just tell a story where everything is pleasant, there's never any conflict, and then everything just ends fine? That seems like it's not a story. Well, that's... that's kind of what Lauren's point is, is that without that conflict, we don't consider it a story. Well,

(29:36):
does that mean that, at the very basic core of being a human, we need bad people and bad things to happen in order to define... you know, to understand it? I was thinking about this when you were talking about Marconi's point about things like wireless technology making things like war ridiculous, and I found myself wondering

(29:58):
whether any technological thing could ever solve war or conflict or poverty, because it does seem like, at this point in society at least, distrust and greed, for better or for worse, are part of the human experience. They're part of the fabric of our makeup. And maybe it's just one of those things that

(30:19):
I cannot imagine being different from our current perspective. And I don't mean this to be a downer note. It's actually an optimistic downer note, because maybe some unpredictable technology will come along that will solve energy, or solve empathy, and lead to a kind

(30:40):
of utopia. Yeah. I don't know if I can ever see a full utopia; I just see things... My optimistic vision for the future is one of progressive solving of small problems. And my optimistic view of the future is that we get to a point where...

(31:02):
we've already reached the point where we can talk to almost anybody, and that's continuing. Obviously, not everyone has access to the internet, but the number of people who don't have access to the internet decreases every year. My optimistic vision of the future is that we arrive at a day where we're listening to everyone. We're no longer just talking to everyone, but we can actually listen

(31:23):
and at least be able to have that conversation. Which, you could argue, gets right back to that simplistic notion of, we'll all sit down and talk it out and everything will be fine, right? I mean, I'm not trying to criticize you for this, but if you just take that literally, just listen to everyone, well, a lot of the people you're going to be listening to are going to have some really stupid and hateful things to say.

(31:45):
But then, if you ultimately get at the core of why they say those things, you could perhaps address the root issues that are producing this in the first place. Unless you just come to the conclusion that some people are inherently bad, which I have a problem with as an idea. But, I mean, there are some people who certainly behave as if they are

(32:06):
inherently bad and give very little indication that they are otherwise. But assuming that you don't buy into that philosophy, then you could say: let's look at the series of events, or the various components of the scenario, that are in place that have led to this behavior, and find out, are there things that need to be addressed

(32:28):
so that people don't develop these ideas or thoughts or prejudices. Because I think, more often than not, it comes from a place of putting blame on others for a situation that you are in, whether it was justified or not. So, in other words, you might say, I'm not as

(32:49):
successful as I should be because those people over there are being given preferential treatment. Or, I should be guaranteed the place where I'm at; don't give that other group the same sort of opportunities that I've had, because then you're somehow taking away from where I'm at. Like, you have to get to the bottom of, where does that come from?

(33:11):
I mean, you're talking particularly about, like, group prejudice or something. But there's also... I mean, there are lots of ways to have nothing good to contribute. I mean, you can also just be, like, a sociopath who hates people. Well, sure, but that's always going to be the case, right? There are always going to be sociopaths, unless we come up with... is it? Yeah, maybe not. I mean, if we come up

(33:32):
with a way of identifying it immediately... But then you have a question of how do you deal with that? Do you do something where you're making a fundamental change to someone's, you know, neural performance, so that they are not going to be a sociopath? At what point does that go way too freaking far? Right? These are... we did a whole other, very

(33:54):
long episode about, like, moral bioenhancement. Yeah. So, I mean, there are always going to be outliers. I don't think you're ever going to get a thing where it's going to be universal. But it's... not that I necessarily think we're going to get to a point where we're all listening to one another, but I think we should always be striving toward that, right? That

(34:15):
should be our goal. Even if we have all concluded that we will never reach that goal, to me, it's still something that we have to strive for if we want to continue to improve as just humans. Yeah, I guess the way I would interpret that, maybe what you're saying, is making a good-faith attempt to understand

(34:37):
everyone else's point of view. Yeah, that's a fair assessment. Yes, yeah, yeah. If we could have a technology that would allow people to do that, and so easily that it would come as second nature... I think it's one of those things where perhaps we just get to

(34:57):
a point where we're able to switch the focus from the technological capability to more of: all right, let's really address the social and cultural issues that are in place that are allowing such things to fester, and really just have conversations about that, and really talk about what are the various causes of this, and what can we do about it. Like, are there

(35:19):
things that we can... are there problems that we can actually work on solving? Are there some things that are really so, you know, nebulous that there's not really a way to solve them, and if so, what else could we do? But then, to me, that's not talking about the technology anymore. That's talking about people being people. And, you know, maybe the technology allows

(35:43):
for greater conversations to happen, but the technology itself doesn't actually make the change. Again, unless we get that good-evil switch in everybody's head and you just make sure everyone's switched to good. Yeah. No, I don't know. I think that's the point about all of these discussions of predictions for the future: that whatever future technology we can imagine, people, as far as

(36:05):
I can personally discern, are still going to be people. And technology can slowly change the way that we act and the way that we interact, but I don't think it changes us, like, intrinsically. I don't think it does either. What I think it would allow us to do is just, again, be more aware

(36:26):
of what is going on beyond our own selves. Whether that means that we care, that's kind of up to the individual. Right, technology is not going to magically make someone care whether a person on the other side of the world is suffering or not, even if they see really compelling evidence that that person is indeed suffering. But it certainly makes

(36:51):
more people aware of it. And that's at least the first step toward getting something done, because if people are unaware, then of course they're not going to move to action. Yeah, they can't act because they didn't know, right. So, whether the ignorance was self-imposed or not... Anyway, this is exactly why we wanted to do

(37:11):
these episodes, right? To have these kinds of conversations, to talk about these big predictions and big ideas. And there are so many more we could have touched on. Like, we limited ourselves to just a couple each because we knew... we didn't know it was going to go over to two episodes, but we knew it was going to be a heck of a conversation. But we've got so much more to say. You'll have to tune in next

(37:32):
week to hear our thoughts. We've got a very special episode coming up for you guys, so you should tune in to that. And, guys, it's been great. If you want to check us out on Facebook or Twitter, we are FWThinking, all one word, on Twitter. Search FW Thinking on Facebook, our profile page will pop up, and we will talk to you again really soon. For more

(37:59):
on this topic and the future of technology, visit FWThinking.com. Brought to you by Toyota. Let's go places.

Hosts And Creators

Jonathan Strickland

Joe McCormick

Lauren Vogelbaum
