
September 3, 2024 • 19 mins
Our futurist Thomas Frey is talking about the future of companionship, and he uses the example of a Buddy Bot and a cranky old guy to get us started down the path to robot companionship. He joins me at 1 to discuss. Find Thomas to speak at your function by clicking here.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thomas Frey, our futurist.

Speaker 2 (00:01):
You can find him at FuturistSpeaker dot com if
you'd like him to come and speak to your organization
about pretty much anything. Hi, Thomas, how are you doing?

Speaker 4 (00:10):
I'm doing great today.

Speaker 2 (00:12):
Well, we already kind of talked a little bit about
the story, the column that you sent me, earlier
on the show, and essentially it's about robot buddies. Tell
the listeners who didn't read the article about robot buddies.

Speaker 3 (00:26):
Well, I'm pretty well convinced that we're going to
have access to this buddy bot within the next five years,
that we will be able to have an AI assistant
that we can talk back and forth with, and that
greets us first thing when we wake up in the
morning. We'll find out things

(00:49):
that are happening around the world in the morning, we'll
find out what's on our schedule during the day,
and it'll coach us through the day, and anytime we
have a problem, it will help coach us through that.

Speaker 1 (01:03):
Now, let me ask this question.

Speaker 2 (01:04):
So are these desktop robot assistants, or are they humanoid
robot assistants?

Speaker 3 (01:13):
Well, the buddy bot will work on any device,
whether it's your smartphone, your smart glasses, or
your computer, so it could actually be with you at
all times. But eventually, I think it'll actually be
an actual buddy bot robot.

Speaker 4 (01:35):
That we'll have in our lives. But yeah, so this is.

Speaker 3 (01:39):
An AI intelligence that we have access to at any
time we need it.

Speaker 2 (01:45):
So okay, wait a minute. Just walk me through
my day. Okay, I get up,
I wake up, my buddy bot is there to say
good morning, you're awesome, it's time to get out of
bed and start the day. And I get up, I
go in and take a shower and come back out,
I go downstairs. So does the buddy bot follow me
downstairs via the Alexa that's in my dining room?

Speaker 1 (02:07):
Like, how? Or is it on my phone? I find
that a little creepy.

Speaker 3 (02:12):
Yeah, it'll be like it's on your phone. Yeah,
so anytime that you need it, it'll be
your personal buddy.

Speaker 4 (02:25):
It'll be your best friend.

Speaker 3 (02:27):
You get up in the morning and you'll tell it
all the things that you have going on in your life,
and you'll ask for advice on all kinds of things,
and so it'll be like the friend you always wish
that you had. And that, I think, is
something we're all striving for. And there's a lot of

(02:48):
people who are very lonely in the world right now,
and this is something that's perfectly designed to fix that problem.

Speaker 2 (02:56):
Why does it sound like one of those little things,
A-Rod? What were those Japanese toys that you were
supposed to take care of and it would cry? Tamagotchi?
Why does this feel like a Tamagotchi gone bad?

Speaker 1 (03:09):
Like... have you seen Duolingo?

Speaker 2 (03:11):
Let me just take this in a completely unrelated direction, seemingly.
Have you ever signed up for Duolingo, the language app?

Speaker 4 (03:19):
I haven't signed up for that, no.

Speaker 1 (03:21):
Okay?

Speaker 2 (03:21):
I signed up for it before we went to Norway,
to learn a few words in Norwegian. Okay, but
then I stopped, because I'm not going back to Norway
anytime soon. Duolingo is now taunting me. Okay,
it's now like, oh, I guess you really didn't want
to learn Norwegian anyway.

Speaker 1 (03:35):
Oh I guess you're a quitter, and all I could.

Speaker 2 (03:38):
Think of is my little buddy bot being like, oh,
I guess you're not going to work out this morning, Mandy.

Speaker 1 (03:43):
I guess you're not going to... I mean, can
we program a non-nag feature?

Speaker 3 (03:54):
Yeah, I think there'll be a nagging scale and
you can dial it down when you want to.

Speaker 1 (04:00):
Zero on the nagging scale. Zero.

Speaker 2 (04:02):
So how far are we from having humanoid robots that
have AI and that can be with us? Or, you know,
we've talked about this before. I think this is perfect
for older people who are very, very lonely. There's an
epidemic of loneliness for older people, and this could.

Speaker 1 (04:20):
Be a wonderful way.

Speaker 2 (04:21):
You program a robot, tell it it grew
up from nineteen thirty to, you know, nineteen fifty in
its formative years, so it absorbs all that information and
you could talk to it like a peer.

Speaker 1 (04:32):
Right, when do we get to that? That's kind of
what I want to know.

Speaker 4 (04:38):
Okay, the humanoid robot.

Speaker 3 (04:41):
We'll have some crude examples probably within the next five
years that we can own and have in our houses.
But I like to remind people that the cars that
we drive today have been in development for over one hundred
and twenty years, and it's taken that long to
get to cars that are this good. So the humanoid robots,

(05:02):
it's going to take a long time of constant redevelopment
and re-engineering to get them to a point
where they'll be more sophisticated than cars, naturally, but
it's going to take a while, certainly. So in twenty years,
I think we'll have some good robots.

Speaker 2 (05:21):
Let me ask this question, because we were talking about
this earlier too, Thomas, and that is: how much do
the kind of leaps forward that we've had in computing
power shorten that timeline? Because, to your point, when they
started building cars, they didn't... I mean, they used
carbon-copy paper to, you know, send plans to somewhere else.

(05:42):
So the technology that they were using was so rudimentary
to get us to where we are today. So starting
where we are today with the advanced computing that we
have and the sort of computing power that we have,
how much does that shorten the timeline in your mind?

Speaker 3 (05:58):
Well, the timeline is continually getting shorter as we move
through the years. So right now, it's probably half
the amount of time. Ten years from now, it might
be a quarter of the time. As AI gets
smarter and more intelligent, more capable, it gives us more

(06:23):
abilities to do things, and so naturally it'll give us
much more sophisticated designs as well.

Speaker 2 (06:30):
You know, we've seen movies like Her, where Joaquin Phoenix
falls in love with, essentially, his Siri, you know,
on his phone. Her. What did I say? Okay, I
did say it right, he was just backing me up. Her.
And I worry about this, on the one hand, replacing

(06:51):
real human relationships for young people of childbearing age. Like,
I'm not worried about older people falling in love with
a robot, because they're not going to reproduce, and if
a robot makes them happy, then more power to them.
But I am worried that you're going to be able
to create someone who is so perfect for you in
every way, shape, or form that it's going to replace
real human relationships. What are the chances of that happening

(07:14):
in the next fifty years?

Speaker 4 (07:18):
Well, there's a good chance of that happening.

Speaker 3 (07:21):
Not so much with the mechanical robots, which will be
kind of a phase one. The next phase
will be the fleshbots, where we're actually growing skin
and flesh on these machines, and they will seem
much more human. Oh yeah, that's where it gets really scary.

Speaker 1 (07:39):
That just grossed me out for some reason.

Speaker 2 (07:41):
The phrase "fleshbot" just... that's like "moist." It's
not a word that you want to think about.
I mean, you would have, like, skin farms where
they would just be growing uniforms to
slide on. Oh geez, that's the stuff nightmares are
made out of right there. If I'm going to have...

Speaker 4 (08:00):
The next level. Yeah, it's the next level, Chia Pet.

Speaker 2 (08:05):
If I'm going to have a buddy, a robot buddy,
I want it to look
like a robot. I don't want it to look like
a human. I mean, I want it to look humanoid,
like I want it to have arms and legs and move
around like I do, and maybe have a pleasant demeanor
on its face, kind of thing. But I don't want
it to look like me, only fake. That, for me,

(08:26):
is just weird. When we get to the point where
you can't tell the robots from the humans, that's not
where I want to be. I don't want to be
there, because that's just weird.

Speaker 3 (08:35):
Okay, all right. Yeah, they'll find out when they
do the testing on these things how intelligent
we want our robots to be, because we'll have the ability to
have different intelligence levels. If we want it to
be as smart as a human, well, that's one level.

(08:56):
If we just want something that washes our
dishes and cooks our food for us, that's less intelligent.

Speaker 4 (09:02):
That's what I want.

Speaker 3 (09:03):
Something that's a real good sparring partner in philosophy,
that's something that might be two intelligence levels above us.
And so we'll be able to decide that, and the
more intelligent robots will cost more.

Speaker 2 (09:19):
Well, all I could think of is that, you know,
going from a robot... and of course there are already...
and if you have kids in the car, you may
want to just dial away for just a second and.

Speaker 1 (09:30):
Then come back in just a minute.

Speaker 2 (09:31):
I'm warning you: ten, nine, eight, five, seven... okay, turn
the station. All right. So there are already sex bots
being built that are very realistic, when you look
at that. And I could just imagine a guy or
even a gal being able to dial that intelligence level
up and down, you know what I mean? Like, oh,
right now I don't want you to talk, and then

(09:52):
when we get back, I want to argue, you know,
philosophy with you for.

Speaker 1 (09:55):
About an hour and a half.

Speaker 2 (09:56):
I mean that that replaces another human being in ways
that we just cannot compete with.

Speaker 4 (10:05):
So how much would you pay for that?

Speaker 2 (10:07):
I'm thinking really hard about that right now, Thomas. I
don't know, but I don't want to replace my human
relationships with... I don't want... "See," this person said, "Mandy,
you basically want C-3PO." But I don't
want him shiny, because that seems like a lot of upkeep.
Like C-3PO... that's a lot of shining on C-3PO.
So no. Let's just say there's a little bit of

(10:28):
skepticism on the text line about this, Thomas. Let me
share some of these. Mandy, Big Brother, in your face.
How would we know that our robot buddies were not
just sucking up information and spying on us for the government?

Speaker 1 (10:43):
Could you prevent that somehow?

Speaker 3 (10:48):
Well, your buddy bot also has to be the guardian
of your privacy. That's part of the mandate that goes
along with the buddy bot.

Speaker 1 (10:58):
I like that. But could the buddy bot... say you
don't want to share secrets... could the buddy bot be hacked?

Speaker 4 (11:06):
Yeah, there's nothing that's hacker-proof at the moment, but
it would be as hacker-proof as you can get.

Speaker 1 (11:13):
Uh, Mandy?

Speaker 2 (11:15):
Would my buddy bot be subject to subpoenas and other
authoritarian legal processes?

Speaker 1 (11:20):
This is I don't know.

Speaker 2 (11:21):
Who is listening to the show that would ask that question.

Speaker 1 (11:24):
But here we are, Thomas. Would they throw you under
the bus with the cops? What would they do?

Speaker 4 (11:32):
Well?

Speaker 3 (11:33):
Kind of a similar question is: would you give
your personal robot your bank account, or would it have
its own bank account? Because if you sent your robot
to the grocery store to get some supplies, would you
want it to have its own bank account so that it
couldn't tap into yours?

Speaker 4 (11:54):
And then if it gets hacked, does that open up...
do you lose whatever you have in that bank account?

Speaker 2 (12:01):
I believe I would have a joint account that my
buddy bot had access to, that only had enough money
for the household stuff. I wouldn't connect it to my
other bank accounts that were mine. That's a
really good question. Like, do you give it its
own credit card? And how do you know it won't
stay up at night shopping on QVC? You don't know
any of these things. We don't know, Thomas. We don't
know what's going on with this. Who is working

(12:28):
on the tabletop AI right now?

Speaker 4 (12:31):
Like?

Speaker 3 (12:31):
What?

Speaker 2 (12:32):
Because now we have Alexa, and Alexa sucks. And I'm
sorry if I just turned her on to say
that Alexa sucks. But Alexa is very biased. I don't
know if you've seen this. Now people are asking Alexa
why they should vote for Trump, and Alexa says,
I can't tell you anything. And then you say,
why should I vote for Kamala Harris? And it gives
a whole list of reasons that you should vote for
Kamala Harris. So on a scale of Alexa to

(12:55):
humanoid robot, where will this AI, you know, companion be?

Speaker 4 (13:03):
Yeah? I think it learns from the owner. I think
it has to be this.

Speaker 3 (13:08):
Constant relationship building bot that learns more about you every day.
It learns what your habits are, what things you're good at,
what you're not good at. It learns where you need help.
It becomes your personal therapist. It becomes your best friend,

(13:28):
your buddy. It can finish your sentences for you. It
can coach you through all kinds of situations.

Speaker 4 (13:36):
That's the buddy bot that I want. I want something
that I can rely on for all kinds of difficult situations.

Speaker 2 (13:43):
The first thing I think, though, is that whoever programs it,
just like I just said with Alexa, whoever's programming it
has the ability to inject all of their own biases
and everything else into it. Or does it kind of
start from scratch with you and build from there?

Speaker 4 (14:01):
That's how it should work. Yes,
it should be very, very personal, very much a one-off
relationship. And, I mean, if
you have an accent, it'll develop that accent.

Speaker 1 (14:17):
Y'all!

Speaker 4 (14:20):
Yeah.

Speaker 3 (14:20):
If you speak a different language, it'll speak in
that language. So these things will be very versatile, very talented.

Speaker 1 (14:29):
This is kind of amazing. I don't know.

Speaker 2 (14:31):
I might feel... like, if I woke up
one day... because when I was growing up, I had
a very significant Southern accent. Very significant Southern accent. And
sometimes when I go back to my hometown, that Southern
accent sneaks in, right? It just kind of
dips its toe in. And if
my robot started speaking like that, I think I'd be annoyed.

(14:54):
I'm like, you're just trying to be like me. Stop
doing that.

Speaker 1 (14:57):
And then would it change and not do that anymore?

Speaker 4 (15:01):
Right, right. Theoretically it can instantly change. And you can.

Speaker 3 (15:08):
You can pick out different voices too, if you'd like.
Rather than a female voice, have a male voice
talking back to you. If you want an Australian accent,
you can have that, or a British accent.

Speaker 4 (15:23):
So you'll have a lot of variables to play with.

Speaker 1 (15:25):
So here's my question.

Speaker 2 (15:26):
As a well-known person on the radio with a
distinctive voice, could I market my voice as a potential
voice for your AI robot? Could I, Mandy, be your
best friend in your phone with my voice, and get
paid for it?

Speaker 3 (15:43):
Absolutely. Absolutely. But it may not
be your path to riches, because people are
probably going to be rather fickle, and they'll say, I
like this voice for the next week, and then I'll
go with something else the week after that.

Speaker 2 (16:01):
Now, when you get your AI
robot friend, do you choose... do you go to, like,
a checklist of: I'd like a snappy attitude, I'd like
witty repartee, I'd like a deep knowledge of classical music?
Do you, like, pick that out? Or do you just start?

Speaker 3 (16:19):
My thinking is you just start, and it picks up
on kind of where you leave off,
where you have questions, where you have difficulties, and it
understands what you respond best to. And if you're a
grouchy curmudgeon like the person that I have in the story,

(16:43):
it will respond in kind of the way that you
would have to respond to somebody like that. I love it.

Speaker 4 (16:50):
Yeah, it'll be very smart.

Speaker 2 (16:52):
One of my texters keeps asking over and over again
if it has body parts other than the brain.

Speaker 1 (16:58):
But I'm not going to read it because it's just rude.

Speaker 2 (17:00):
So you can stop texting now, Texter, I'm not asking
your question. And those are different kinds of robots, and
they're already available. So there you go.

Speaker 1 (17:07):
Knock yourself out.

Speaker 2 (17:08):
Uh, Mandy, let's see... good. Could it go to rock
concerts with me? Does it need a passport? What if
I leave the country?

Speaker 3 (17:20):
Yeah. Well, if it's tied to your phone,
it just goes with you wherever you go. It knows
whatever conversations you've been in. So like, if a
little later on you want some clarification on one of
the points that person talked about, then it
could actually repeat it and

(17:42):
actually go into detail on that.

Speaker 2 (17:44):
Wait a minute, did you just tell me that I
could ask it what happened in a conversation I had
with my husband, where I know I'm right and I
know he's wrong? And then I could have my little
personal assistant play that back and be like, yo, this
is what really happened? Sign me up, take my money.
Where do I need to go to get this right now?

Speaker 4 (18:04):
Yeah?

Speaker 3 (18:05):
But if you ask it who's right and who's wrong,
it'll probably have kind of a delicate way of
not answering.

Speaker 2 (18:12):
I don't need it to answer right or wrong. I just
need to hear what words were actually said. Sometimes, Thomas,
that's all I need. I need a transcript of
a conversation. That's all I need. Although I might find
out that I'm wrong far more often than I think.

Speaker 4 (18:28):
Yeah, I think this will be the key
to unveiling a lot of interesting facts about ourselves that
we didn't know about.

Speaker 2 (18:37):
Yeah, exactly, exactly. And... okay, never mind, take my name
off the waiting list.

Speaker 1 (18:42):
I don't want this at all.

Speaker 2 (18:43):
Thomas Frey, we appreciate you every single month.

Speaker 1 (18:46):
You can find him.

Speaker 2 (18:47):
At FuturistSpeaker dot com. I put a link on
the blog today if you'd like to have him come
speak to your organization about any aspect of the future,
including what you do for a living.

Speaker 1 (18:56):
Thomas, good to see you, my friend. I'll talk to you soon.

Speaker 4 (19:00):
Thanks man.

Speaker 1 (19:01):
Have a great day. You too. That is Thomas Frey.

The Mandy Connell Podcast