Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thomas Frey joining us from the future. Well, I mean,
maybe you're a little, you know, further east than I am.
Speaker 2 (00:07):
I don't know. Good to see you, Thomas. Yeah, on
your show again. So you sent me
Speaker 1 (00:14):
This paper that you'd written about a robot changing a diaper.
And when you say it like that, like, because I
said this earlier in the show, when will a robot
be able to change a diaper? And it sounds like
a simple proposition when you think about when you had
to take those life skills classes and they gave you
the doll and you had to do the triangle for
the diaper, and you're like, sure, a robot could do that.
(00:36):
But then once you become a parent, you realize that
it's like trying to put clothes on a fighting, you know,
angry octopus that's pooped everywhere, you know. So there's
a lot of different layers to it. Why is it important? Well,
what kind of milestone is it for a robot to
be able to change a human baby diaper?
Speaker 3 (00:59):
Well,
Speaker 4 (00:59):
See, I view this as kind of the Turing
test of humanoid robots, because it requires, uh, these tactile skills.
It requires adaptability and the resources to
deal with a screaming child, and lots of variables
that normally don't get programmed into something like this. And
(01:21):
then also the ability to gain the trust of the mother.
Speaker 3 (01:26):
And that's the key thing here.
Speaker 4 (01:28):
I think that if if you don't get the mother's trust,
they will never buy a robot for their their home.
Speaker 3 (01:36):
Uh, and so this is one of the
Speaker 4 (01:40):
Kind of the ultimate litmus tests, I think, of
a robot like this: can they handle
all of the different things, the different, uh,
smells and the different chaotic moments and all of that,
and, uh, still
Speaker 3 (02:00):
Perform a flawless job.
Speaker 1 (02:03):
So you just mentioned the Turing test. And what is
the Turing test? T-U-R-I-N-G
is what we're talking about, Alan Turing?
Speaker 4 (02:12):
Yeah, Alan Turing. In around nineteen fifty he was just
asking this question: can machines think? And so he came
up with this Turing test. And the crude form of
the test was, you go into a room, and
it has two curtains, and behind one curtain is a
(02:32):
computer and behind the other curtain is a person. You start
asking questions, and the computer can answer and the human
can answer, and at a certain point in time, as
the computer evolves, he surmised that you wouldn't
be able to tell the difference between which is the
computer and which is the human. And so we've used
(02:57):
that Turing test as kind of a
litmus test for how far we've advanced with computer technology.
And clearly we've passed the Turing test in several different categories,
but there's a lot more categories to go in the future, because.
Speaker 1 (03:16):
I mean chat GPT can have a conversation with you
that is, you know, realistic enough that if you truly
didn't know that it was a computer, you maybe could
be fooled until they tell you something wackadoodle. I mean,
you know. But I think that's an interesting standard. So
to be passable, let's just call it passable. That's a
(03:37):
good way to put it. To be passable as a
human in a conversation. With robotics, what is it about,
specifically, the physical part of changing a diaper, that would
have to be addressed that we haven't already addressed, if
you know what I
Speaker 4 (03:50):
Mean, Yeah, I think it's all the variables. A child
is squirming and screaming and crying and fussing around,
and all of this is kind of delicate, little pieces,
so you don't want to hurt the child, you don't
want to pinch the skin.
Speaker 3 (04:11):
There's all kinds of things that can go wrong in
the middle.
Speaker 4 (04:14):
Of all this. And for people, this becomes a real high-anxiety
situation.
Speaker 3 (04:21):
At times.
Speaker 4 (04:22):
Most of the time it's fairly straightforward and you can
change the diaper, zip, zip, zip, and you're finished. Other times
it turns, especially when you're in public trying to do something.
Speaker 1 (04:35):
I think every parent has at least one story of
that time my kid pooped all over me in public.
I think everybody has at least one of those in
their arsenal. If you're a parent, you know, and then
you're in the, you know, in the room trying.
I absolutely understand that. So how far away are we?
Because I have seen some videos lately that have been
(04:58):
so realistic that I have to go back and I'm like,
is this AI?
Speaker 2 (05:02):
Is it? Is it real?
Speaker 1 (05:04):
I hate that AI video has has gotten to the
point where it truly is impossible to tell if what you're.
Speaker 2 (05:12):
Seeing is real or not. And where are we in
the robotics?
Speaker 1 (05:16):
Process right now, Like, how close are we to having
a robot that has just a soft enough touch that
a mom would feel confident giving her baby to that robot.
Speaker 4 (05:28):
Okay, to put this in perspective, there are over one hundred
companies throughout the world that have received funding for creating
humanoid robots, so it's not possible to know exactly where
all of them are and what stages they're at. But
in some of the videos I've seen of Optimus, that's
(05:55):
the robot that Elon Musk is creating.
Speaker 3 (05:58):
The way the hand.
Speaker 4 (06:01):
Moves and is manipulated, everything seems very... It has
some very delicate skills tied in with it. I think
at least by two thousand and thirty, if
not sooner, they'll be able to accomplish something like this.
But again, the real litmus test ends up being is
(06:24):
the mother going to trust the robot to do it?
And I think there's a lot of barriers there. If
the robot just screws up in one way or another,
if the kid gets pinched one time, then they're going
to be very hesitant to try it again.
Speaker 1 (06:42):
Oh well, I got to tell you, I'm already thinking
of a science fiction thriller where in the future there
is a world and robots are starting to become a
normal part and then this extremist organization who doesn't want
any kind of robotics goes in and actually kills a
baby and frames a robot so they can prevent more
robots from being purchased. That's my new just popped into
(07:03):
my head right now, So if anybody wants to write
that with me, let me know. But that's how my
brain works. Mandy, and I think this is a valid question, Thomas,
How long before baby pees all over the robot? These
robots are going to have to be durable, waterproof, They're
going to.
Speaker 2 (07:18):
Have to be able to function in a urinated-on state.
Speaker 4 (07:21):
Correct, right. They're going to be asked to do all
kinds of crazy things, not just infant care, but elder care,
managing people, and.
Speaker 3 (07:36):
Therapeutic situations where.
Speaker 4 (07:39):
PTSD issues are occurring, with people that are screaming and
yelling, and probably narcotics are involved, and you have to
get somebody off of the high that they're on.
You can start going through a list of all kinds
(08:02):
of psychotic situations that they may be thrust into, and
then you can start seeing the total kind of chaos
that they're going to have to contend with. I think
there's going to be lots of challenging situations. I'm not
sure this is very straightforward at all.
Speaker 1 (08:21):
Well, let me ask you this, because, like when I
think about this and I think about what robots could
look like in the near future, we think of humanoid robots.
And I'm just going to use Data from Star Trek:
The Next Generation as an example, because technically he was an android,
but it's okay, we're going to use him as an example.
Are we going to get to a place where they're
going to have a human skin or a circulatory system
(08:43):
exterior that is going to allow them to have a
true human appearance, because that's I think what freaks people out.
But on the flip side, you know that that could
solve a lot of problems for people who are lonely.
I mean, there's there's so many different weird applications for.
Speaker 3 (08:59):
This, right I think.
Speaker 4 (09:03):
I think the skin is going to change quite a
bit over time as they become more human like and lifelike,
and this idea of actually growing the skin on the robot,
I think we're going to be getting into that fairly soon.
So these are some of the things that we'll have
(09:23):
to contend with as robots evolve over the next
forty, fifty years. Keep in mind that the cars that
we drive today have been in development for the last
one hundred and twenty plus years, so it's taken that
long to get.
Speaker 3 (09:38):
The cars that are this good now.
Speaker 4 (09:40):
To get to a robot that is so finely refined
that it can handle all these human like tests, it's
going to take a while.
Speaker 2 (09:49):
Okay, I'm okay with that.
Speaker 1 (09:50):
I have one more question about the robot thing, though,
and that is like, in my mind, if I were
going to have a robot caring for my child, which
is not a given by any stretch of the imagination, okay,
so I would want to know that they were programmed
with everything like first aid, so they'd.
Speaker 2 (10:07):
Know exactly what to do. And I was thinking to myself.
Speaker 1 (10:10):
Like, would there be any limits to what you would
want your robot to know? Or do you want a
true know it all robot that you could go to
at any given moment and say, robot, you know what's
the flight speed of an unladen swallow and ask them
the question and get an answer so.
Speaker 2 (10:27):
You know, how are you going to compartmentalize
that knowledge? I guess this is my question.
Speaker 4 (10:36):
Well, if you have your five year old kid hanging
out with a robot asking a lot of questions, you may
want to limit what they're willing to talk about, right right.
Speaker 2 (10:45):
Oh yeah, that's probably a good idea. It's probably a
very very good idea.
Speaker 1 (10:49):
An extraordinary number of people are hitting the text line
with robot sex doll conversations. What is wrong with you people?
But that's not what I was talking about when people
were lonely. I'm talking about the epidemic of loneliness in
this country, where it would be kind of cool to say,
I'd like to create a companion for me, and you
could program the knowledge set that that robot got, and
(11:11):
maybe you say, look, I want them to have the
same historical progression that I had growing up, So you
can almost build yourself a peer And as people get older,
and I mean in their eighties, late eighties, nineties, most
of their peer groups starts to die. And that was
something I learned from my late mother in law. She
just wanted to talk to people who understood the references,
(11:32):
you know, she wanted to talk to people who had
the same.
Speaker 2 (11:34):
Cultural art that she had.
Speaker 1 (11:36):
So I think from that perspective, it could be a
really interesting situation where you'd have a care you know,
maybe a residential facility full of older people, and you
could literally program them to be a different friend for
every single person in that.
Speaker 3 (11:52):
Place, right right?
Speaker 4 (11:55):
And if you think about how long would it be
before somebody is willing to leave and go out on
a date with their wife or their husband and just
let the.
Speaker 3 (12:04):
Robot care for the kids.
Speaker 4 (12:06):
Yeah, that'll be a different stage as well. Having the robot,
you know, prepare dinner for a room full of guests,
people that show up. Say you've invited twelve different
people over for dinner, and you have the robot prepare
everything and then clean up afterwards and do the dishes.
(12:31):
How skilled does that robot have to be to accomplish
all of that? And is this a humanoid robot or
is this just a kitchen with robotic arms that are
doing most of that work.
Speaker 1 (12:45):
I got to tell you, I want a humanoid robot,
at least humanoid ish, right, like those weird Honda dogs
with no heads. I don't want that running around my property.
I want something that looks more recognizable, that isn't the
stuff nightmares are made out of, right? I want something
that feels a little more Rosie the Robot than, you know,
a faceless kind of droid that's, I don't know, that's
(13:09):
that straddles a weird line between, you know, human and
plastic. I want it to somewhat be something that looks identifiable,
I guess.
Speaker 4 (13:20):
Yeah. So if you yell at your robot dog, you
want it to acknowledge that it's being yelled at.
Speaker 1 (13:26):
Instead of it turning around and, you know, flipping
me off. If robots are used for crime, then who
is responsible? The reason I read that question is
because it is kind of silly, but it's not. That's
part of the ethical dilemma in robotics. Like, that's
a big part of it. Can you order a robot
to commit a crime? You know, I don't know who's doing
Speaker 3 (13:47):
That, right, right. Yeah, yeah. We think we have
problems now.
Speaker 4 (13:56):
I think we're going to get into really complicated things.
But when we get into a world of robot soldiers
and robot fighting machines, then kind of all bets are
off, because it's not just humanoid robots, it's flying robots
that are drones.
Speaker 1 (14:17):
Right, That's what concerns me. That concerns me a lot more.
The drone technology has come so far, so fast that
I think drones are really the next frontier when it
comes to warfare, more so than humanoid robots, because they're
way cheaper and they can be deployed right now, and
(14:37):
the kind of precision nature that drones deliver is very
concerning to me. I think that might end up
being one of the most significant
weapons for the next twenty years, used by everybody, terrorists,
I mean, everybody. Scares the crap out of me.
Speaker 2 (14:54):
I'm not going to lie, Thomas.
Speaker 3 (14:56):
Yeah.
Speaker 4 (14:57):
And then when you start thinking about, well, what form
can these robotic soldiers take, they can look like
a robotic snake as an example, yep, or a robotic
fish that goes through the water and jumps out and
blows up.
Speaker 1 (15:12):
Thank you, Thanks for another thing to worry about fly fishing.
Speaker 2 (15:16):
Thanks so much, Thomas.
Speaker 1 (15:18):
It's bad enough I'm not going to catch anything now
I have to worry about a giant fish leaping out
of the water and using laser eyes to mow me
down right there in the river.
Speaker 4 (15:27):
Oh my god, robot fish takes your fish off the hook.
Speaker 2 (15:32):
Hey, yeah, there you go. Yeah, whole, it just
eats it whole. No, did you see?
Speaker 4 (15:37):
And I don't know.
Speaker 1 (15:38):
I saw this last week. I should have found it
and put it on the blog today, but I forgot.
South Korean scientists have made tiny little nanobots that are
based on ants, and it's absolutely the same concept as
the movie Big Hero Six. And I don't know if
you've watched Big Hero Six, it's very good. But
the whole thing is this kid invents these little nanorobots,
(16:00):
and of course then someone steals them to use
Speaker 2 (16:01):
Them for evil.
Speaker 1 (16:02):
But it's amazing to see. They have these little tiny,
almost like grains of sand, or not sand, rice, and
they can move things, and they can program these little
robots to work collaboratively to move stuff the same way ants
move stuff. And I'm thinking to myself, this is crazy.
I mean, this is science fiction.
Speaker 3 (16:23):
Yeah, they're coming in every scale
Speaker 4 (16:26):
You can imagine every shape, every possible form, and they'll
have multiple capabilities. We can just start
going down a checklist.
Speaker 3 (16:36):
Can they fly? Can they swim?
Speaker 4 (16:38):
Right? Can they jump? Can they climb a tree?
Can they attach themselves to a car or a moving vehicle?
And that's the world we're moving into, and somehow we've
got to create limitations to prevent this from just totally
disrupting the world that we live in.
Speaker 1 (16:59):
Well, I mean, we've all seen the science fiction movies,
and I'm just going to lean into it. You know,
when the robots take over, Yeah, it'll be fine. I mean,
can they do any worse than we're doing right now?
On so many levels? So many levels. I also real quick,
I want to get to this before we run out
of time. I also linked to a story that you
did about kids making a move away from college and
(17:23):
yet still being able to have a successful career, and
I wanted to ask you about this specifically because we've
been talking a lot, and I know we don't ever
talk about politics, but there's been a lot of conversation
about H one B visas and immigrant labor, especially in
the tech.
Speaker 2 (17:39):
Field, and things of that nature.
Speaker 1 (17:41):
So it seems that what you're saying in this article
that I linked today on the blog is that there
are companies that are offering certifications and other things to
let tech minded kids who don't necessarily need to go
the college route, go ahead and pursue that as a career.
Speaker 2 (17:59):
Is that what we're seeing here?
Speaker 4 (18:01):
Right, right. There are lots of alternatives out there. They can
go to trade schools, they can do apprenticeships, they can
take online courses, go to community college, coding boot camps,
and these certifications are actually becoming an alternative to college degrees.
And so that's becoming very interesting. And a lot of
(18:23):
them are just going straight into entrepreneurship. They're just going
to start their own company and run with it. And
so how do we credential people that have this mishmash
of skills that.
Speaker 3 (18:38):
Are coming into the workforce right now?
Speaker 4 (18:41):
Now, the declining birth rates and the expensive
tuition and the diminishing ROI that people get from going
to college have amounted to only thirty-six percent of
people expressing confidence in higher education right now. Wow. So
that number keeps declining because the price keeps going up.
(19:05):
I mean, we have one point seven trillion dollars in
student loan debt in the country, and a huge
portion of that, three point five million people over the
age of sixty still owe over one hundred and twenty-five
billion dollars.
Speaker 2 (19:23):
That's a lot of money for retirement.
Speaker 4 (19:25):
Yeah, so this is a problem where we're
just taking money from the future, and while it enriches
us today, it creates a massive problem moving forward into
the future.
Speaker 1 (19:41):
Well, I wanted to bring up the article because I
think that I know people that have gone this route
who did.
Speaker 2 (19:48):
Not go to high school.
Speaker 1 (19:50):
I mean, excuse me, did not go to college, but
got Microsoft certifications, got various network certifications, did a lot
of other stuff, and just went to work in tech.
And now at my age, so that was thirty years
ago before that was cool, and now they're vice president
level and been developing for thirty years.
Speaker 2 (20:08):
So it is a viable pathway.
Speaker 1 (20:10):
But I think it's interesting, because I
believe that higher ed has one hundred percent done this
to themselves by not adapting, by continuing to cling to
a model that doesn't necessarily work for everyone. And now
we're going to see colleges close
with some great regularity, I think, over the next ten years.
(20:30):
But I think that's a good thing, and maybe they'll
get serious about delivering on a mission by making sure
that kids graduate with skills that can help them get
an actual job.
Speaker 4 (20:40):
When you think of the cost of managing an entire
campus and the student body, and the security, and the utilities,
and just all of the maintenance and repair of an entire
campus like that, these things are very expensive, and so
they have to charge a lot of money to maintain
all that, And I just I think we're at a
(21:01):
point where fewer and fewer people are.
Speaker 3 (21:02):
Going to be willing to pay that, yep. And so we're.
Speaker 4 (21:06):
Going to see lots of consolidations. We're going to see
lots of selling off of assets. Mergers are going to happen.
So we're going to see lots of crazy activity in
academia over the next even five years. It's going
to happen pretty quickly.
Speaker 2 (21:25):
Here. Thomas Frey, futurist.
Speaker 1 (21:27):
You can find him if you want him to come
speak to your organization about anything from the future. He
is available to make that happen. Thomas, good to see you,
my friend. I'll see you next time, all right.
Speaker 3 (21:37):
Thanks,