Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Cool Zone Media. Welcome back to It Could Happen Here, a
podcast about things falling apart, and sometimes about stuff that's
less depressing than that.
Speaker 2 (00:18):
Today we're doing an episode that's I don't know, part
funny and part hey, you should be aware of this
thing because it's it's kind of fucked up.
Speaker 3 (00:30):
It certainly could happen. It probably shouldn't. It probably shouldn't
happen here, but it certainly could.
Speaker 2 (00:36):
But it certainly could. Garrison Davis is on the other line.
That I mean other line. This isn't a phone call.
That's the other voice that you are hearing right now.
And earlier this year, Garrison and I went to CES,
the Consumer Electronics Show in Las Vegas, Nevada, where Garrison
had a wonderful stay at Circus Circus that did not
smell like dead clowns.
Speaker 3 (00:59):
Which definitely did not just shut down this summer
due to horrible infestation problems.
Speaker 2 (01:05):
Oh, that's where you're staying next year too, buddy. Anyway,
we encountered, while we were going through all these different
technology companies and whatnot, this very peculiar AI project, and Garrison,
I'm going to hand things over to you now because
you're the one who actually prepared an episode.
Speaker 3 (01:22):
Yeah. So I dug into this AI project more when
I was making my post-conference episodes, and after just
a few minutes of like doing background checks and stuff,
I realized that this would become its own episode because
of how wild things got very, very quickly. This company
(01:43):
is called Mind Bank AI. As the name suggests, they
are an AI company based in Florida with the goal
of creating personal digital replicas of living humans using artificial
intelligence and an evolved NLP, or natural language processing.
Speaker 2 (02:04):
Yeah. Basically, these are.
Speaker 3 (02:06):
Algorithms that are used by GPT, chatbots, predictive texting, and
digital assistants like Alexa and Siri. Yeah, language models
that respond to feedback. They're pretty common these days. We
encounter them a lot, right? Like whenever you're typing
on your iPhone, they will generate text that they
(02:27):
think you're gonna write. But what Mind Bank is trying
to do is a little bit different.
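(A rough illustration of the predictive-texting idea Garrison is describing: a minimal bigram "language model" in Python. This is a toy stand-in, assuming nothing about what Apple or Mind Bank actually ship; the training text and every name in it are invented for the example.)

```python
from collections import Counter, defaultdict

# Toy predictive text: count which word tends to follow which, then
# suggest the most frequent follower. Real keyboards and GPT-style
# chatbots use far larger neural language models, but the core idea
# is the same: predict the next word from what's been typed so far.
training_text = "i am going to the store i am going to be late"

def train_bigrams(text: str) -> dict:
    words = text.split()
    followers = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1
    return followers

def suggest_next(followers: dict, word: str):
    if word not in followers:
        return None  # no prediction outside what the model has seen
    return followers[word].most_common(1)[0][0]

model = train_bigrams(training_text)
print(suggest_next(model, "going"))  # -> "to"
print(suggest_next(model, "am"))     # -> "going"
```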
Speaker 4 (02:33):
Yeah.
Speaker 2 (02:34):
When we encountered them at CES, their booth had
these signs that were, it was stuff like, you know,
set up a legacy for your kids. You know, it
was basically advertising, this is a way to allow a
part of you to exist in digital form and communicate
with your descendants forever.
Speaker 3 (02:52):
Yes, so we found them in the US government sponsored
section of CES, which is already a great sign. Yes,
already already looking good. But unlike other kind of AI
digital copies of humans, which typically are just language models
that generate responses based on an archive of someone's writing
(03:15):
or recorded interviews or online presence, Mind Bank instead seeks
to create an evolving, unique digital twin by having
a person input their personal data basically tons of personal
information about themselves into an AI on an ongoing basis.
And by analyzing your data inputs, Mind Bank says that
(03:36):
your digital twin will quote unquote learn to think like you,
and their CEO claims that this process will eventually help
him achieve immortality.
Speaker 2 (03:48):
Oh oh, that's good. I hadn't caught that when we
talked to the guy, that he believed that. I love
whenever you get these guys who are like, I will
just offload my brain onto a machine and then I
will live for ever in the cloud. And of course, man, yeah,
that's how consciousness works. Absolutely, buddy.
Speaker 3 (04:07):
All right, I'm going to play this video next.
Speaker 5 (04:12):
Humanity is limited. Our bodies age, our memories fade. Technology
outpaces evolution. The solution is your personal digital twin, transfer
your wisdom, become the best version of yourself, and live
(04:34):
forever through data.
Speaker 3 (04:36):
Oh boy, let's go beyond.
Speaker 2 (04:40):
So all right, I gotta know one thing before you
start in Garrison, which is that when they note mentioned
that like technology, like there's a line about like technology
making everything better, they're showing a man who has lost
his leg walking on a treadmill with an artificial leg.
And look, I think I have so much admiration for
the people who make artificial limbs. Wonderful thing to
(05:01):
be doing, great important work. They're not as good as
real legs. Everyone agrees with this, right? Technology is not
making it better. Yeah, that's what it's saying, though. No,
that's technology allowing someone to adapt to
a terrible, terrible thing that happened to them.
Speaker 3 (05:23):
But Robert, don't you want to live forever through data?
Speaker 2 (05:27):
No? No, I don't. I'm exhausted now, Garrison.
Speaker 3 (05:32):
Okay, all right, so let's go into this a little
bit more. Your immortal digital twin is made possible quote
by safely storing your data over the years, artificial intelligence
and computers of the future will have ample data to
compile a digital version of yourself and predict your responses.
So that that is their idea of how this thing works.
(05:55):
Another one of their very very funny YouTube videos, titled
The Vision, promises that quote, the next personal computer is
you. Store your memories, your infinite potential. Take advantage of
AI enhanced humanity.
Speaker 2 (06:15):
God damn it.
Speaker 3 (06:17):
So that is their vision.
Speaker 2 (06:19):
My personal computer absolutely is not me because I do
not play Baldur's Gate 3 very well, you know, like I
can't run it on my hardware.
Speaker 3 (06:30):
Well, that's that's why it's that's why you got to
buy the new Monster Manual and then maybe it could
all just be in your brain.
Speaker 2 (06:36):
Actually, yeah, I am full of shit. D and D
is still better when you run it on your own hardware. Goddamn, this is the.
Speaker 3 (06:42):
One thing you actually can do pretty good by yourself.
Speaker 2 (06:45):
Why did I pick that one? Yeah, it's just so like,
I don't think most people buy this. I don't think
this probably is going to be a success. I don't
think most people. I think most people's reaction to this
is like kind of sneering, which is the right reaction. This, Yes,
but there are people who do feel this legitimately, and
that is a thing of almost unfathomable sadness. Like, yeah,
(07:10):
I had my angry atheist period like a lot of people,
but like I have so much. I'm so much more
okay with Christianity than I am with this.
Speaker 3 (07:19):
Oh yeah, absolutely. So before I get into how this
is all supposed to quote unquote work, first I want
to talk about how the founder and CEO says that
he got the idea for this company, because I think
it puts into focus how he sees this product ideally
functioning in the future. So Emil Jimenez was riding a
(07:42):
train with his four year old daughter. She was playing
on her iPad and discovered Siri. She began talking with
Siri and asking it questions like what do you eat?
And do you have a mommy? I'll let I'll let
Emil tell the rest here.
Speaker 6 (07:57):
But thirty minutes later she was laughing and having really
like a nice time with Siri, and she said, Siri,
I love you. You're my best friend. And that struck
a chord with me that that inspired me so much,
because I said to myself at that moment, children don't
see computers and devices as a tool. They see them
(08:18):
as a companion. And today she speaks with Siri or
Alexa or any other device. But in the future, I
want her to be able to speak to me, to
be able to ask me a question, just like she
did the device. And understanding the technology, I know that
the only way that's possible is if I'm able to take my
thoughts and put them in a cloud so that then
later she can access that information. So that's how
(08:42):
the idea for Mind Bank came about. It's a place
for you to store your ideas for the next generation
to tap into.
Speaker 2 (08:49):
No, so the generations already linger too long. We had
it right when people died when they were, well, not died.
But Logan's Run had it right. We should kill everyone
at thirty five. But this is this is so fucking offensive,
Like the the idea that for first off, like if
(09:12):
you're looking at we want a device, you know a
way to use technology to help people grieve or something,
and like you decide maybe having a chatbot that can
talk to you. I'm sure it's possible that that could
be part of healthy grieving. I'm not going to say
that that there's no place for that, but something that
is definitely not just stupid, but toxic and poisonous is
(09:32):
having a machine speak with the voice of a child's
parent while that parent is alive, and confusing the child
as to whether or not the phone or their parent
is conscious. Like that seems bad to me.
Speaker 3 (09:45):
There's actually another product that that does this right now,
which has kind of caused some controversy for this, for
this very thing you mention. It's a it's a Takara
Tomy smart speaker, which, after listening to a parent's voice
for fifteen minutes, can replicate it and tell your
child bedtime stories if you aren't physically present. And this
(10:07):
is this has similarly kind of caused people to
have a whole bunch of questions around, you know, is
this good for a child's brain development to have
their parents' voice be coming out of like a
smart speaker. The answer is probably not, but yeah.
Speaker 7 (10:24):
So.
Speaker 3 (10:24):
According to Mind Bank's website, Emil's four year old daughter's
interactions with Siri quote started a quest in his heart
to live forever for his daughter. The quest for immortality
has led to something much bigger for humanity because the
next personal computer is you unquote. So there's that there's
(10:46):
that other line again about having this quest in his
heart is actually part of a bigger a bigger quest
for all of humanity to live inside a computer or
to have a computer.
Speaker 2 (11:00):
Yeah, trained on you. He's he's he's hitting the same
speech cadences that guys like Musk use. Like, he understands, yes,
the kind of he understands, partially, the degree of hype
that you need to get something like this off the ground. But he
is he is going too hard. And I'm making that
judgment based on the incredibly comforting fact that as you
(11:21):
tell me these horrible things, I am looking at your
screen, and Mind Bank has seventy eight subscribers on YouTube,
so the company has not yet broken through.
Speaker 3 (11:30):
I do want to play one one ten second clip,
just because the phrasing is really funny.
Speaker 6 (11:36):
I was inspired by an interaction my daughter had with Siri.
What started as daddy's quest for immortality has led us
to something far kind.
Speaker 2 (11:46):
Oh my god, that's pretty funny, right.
Speaker 4 (11:50):
Man.
Speaker 2 (11:52):
But no, Robert, you were You were totally.
Speaker 3 (11:54):
Right about about kind of how Emil's like speech pattern
cadences is pushing a very specific thing. Because before Emil
got into the tech industry, for eighteen years. He worked
in marketing. He has degrees in psychology, communication and art
direction and business administration. He isn't a tech guy. He's
a marketing guy. And I think that's really good to
(12:16):
keep in mind throughout our whole discussion of how he's
trying to get funding for mind Bank, because that is
primarily what all of this marketing is for. It's to
attract investors. Because this is still he's still in very
early stages of this company. They do have a product,
it's out, but it's still primarily based on getting investors
to give him money.
Speaker 2 (12:37):
I think what's most disturbing to me about this is that, like,
this is not going to work for this guy because
he's a loser nobody cares about. But if Elon Musk
or one of our other many techno grifters, or if
a number of them got behind similar things, like I
think that the nightmare scenario to me is someday hopping
(12:58):
on Twitter to see that fucking Ian Miles Cheong or
Ben Shapiro or Jackson Hinkle or any one of these
like horrible horrible social media poison distributors will be like
I have made an AI trained on my voice you
can have me all the time to argue, Like if
(13:18):
you want to, you know, you can ask me questions
or whatever. You go to a protest and have me
yell at liberals for you, Like, something like that will
happen at some point with one of these guys.
Speaker 3 (13:27):
I cannot wait to bring Ben Shapiro to Thanksgiving
dinner and have him argue with, yeah, people around there.
Speaker 2 (13:36):
The next time you stay at my house with somebody
that you love and care about and feel comfortable in
the arms of you are going to drift off to sleep,
and then through the speakers that I have installed in
the room, you will hear Ben Shapiro's voice coaxing you
both to acts of love. Oh, that's that's what's gonna happen.
Speaker 3 (13:55):
So as an example of this kind of very marketing
heavy approach, I'm gonna I'm gonna read something from the
homepage of Mind Bank's website.
Speaker 2 (14:04):
Quote.
Speaker 3 (14:05):
Our vision is to be the world's most trusted guardians
of your AI digital twin and move the human race forward.
Humanity's next evolutionary step is to combine ourselves with AI
and move humanity forward so that we are no longer
bound by anything. That sentence is just marketing mumbo jumbo.
(14:26):
It's it's it's meaningless hype, like hype words and phrases
that refer to this like science fiction future. But like
it's it's nothing.
Speaker 2 (14:35):
It's worse than meaningless. It's like it's it's wrong, it's
stupid wrong. Like the idea that like, you would not
be bound by anything if you could live inside
a chatbot. Like, yeah, I have I have an AI.
I have used an AI, right, I have it on
my computer, my computer. Were I to hurl it across
the room in the same manner that I myself have
(14:58):
been flung, it would break, and I would not. Like I.
Speaker 3 (15:02):
Am finally free to think within my computer's RGB gamer RAM.
Speaker 2 (15:08):
Yeah, finally, Like when I have a laptop that gets
too old, Like the very act of surfing the internet
is a nightmare. I don't want my consciousness on something
that ages at the speed of a smartphone. Like, that's
even worse than being a person, Robert, do you know
what else is a very important evolutionary step for the
(15:29):
future of humanity? Oh God, I don't know. When we
all suddenly spontaneously, as if by God's grace, start speaking
with the voice of Ben Shapiro.
Speaker 3 (15:41):
Yes, and perhaps you can do that if one of
our sponsors is Ben Shapiro bot, coming soon, yep, to
a smartphone near you.
Speaker 2 (16:01):
All right, we are back.
Speaker 3 (16:03):
Let's finally talk about how this digital twin thing is
actually supposed to work. So you download the Mind Bank app.
I'm sure that's totally safe.
Speaker 2 (16:14):
Yeah, I trust this with all of my thoughts.
Speaker 3 (16:19):
And every day, your digital twin will ask you questions
about how you're feeling and what you're thinking about, and
as you tell it your quote unquote life story. Your
inputs will be used to train the twin to make
a more accurate digital copy of yourself. This is from
their This is from their website home page, quote, store
(16:40):
your consciousness. Guided questions help train your digital twin to
know your life story so you can live forever through data.
The more questions you answer, the closer your AI digital
twin will get to becoming you.
Speaker 2 (16:53):
Unquote God in Heaven.
Speaker 3 (16:57):
So when Robert and I were at CES this past
this past January, we spoke to Mind Bank's co-founder
and the director of Systems, Architecture and Cybersecurity, and I'm
going to let him explain kind of some of some
of the process of asking Mind Bank questions and how
(17:18):
that helps craft this digital twin.
Speaker 7 (17:20):
We ask you questions from how's your day to what
does money mean to you?
Speaker 4 (17:23):
And you answer those questions with your voice in a
natural way. That gets converted from voice to text. You
get a sentiment analysis on the.
Speaker 7 (17:29):
Text and provide you a dashboard of what you're feeling
when you say that, so that you can also continue
to use it over time. And then as you use
it over time, the dashboard, we'll show you that you're
doing better or worse, just.
Speaker 4 (17:40):
Like a running application, what better or worse it?
Speaker 7 (17:43):
Whatever metric that you're interested in, your happiness, your awakeness,
your awareness, your.
Speaker 4 (17:49):
We have a very large amount of sentiment that we
can provide you with.
Speaker 8 (17:52):
Here's a small bit, but you can see kind of
what we have looks like here. You've got multiple different
possible types of sentiment, and then within each sentiment, you've
got multiple different factors that you can weigh against.
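(To make the pipeline he's describing concrete — voice to text, then sentiment scoring, then a dashboard tracked over time — here's a minimal sketch in Python. The word-list scoring is a deliberately crude stand-in; the actual app presumably uses proper psycholinguistic models, and every name below is invented for illustration.)

```python
from datetime import date

# Toy sentiment pipeline: assume the voice answer has already been
# transcribed to text. Score it by counting positive vs. negative
# words, then log scores over time like a "dashboard of the mind".
POSITIVE = {"happy", "great", "calm", "excited", "good"}
NEGATIVE = {"sad", "tired", "anxious", "angry", "bad"}

def sentiment_score(answer: str) -> float:
    words = [w.strip(".,!?") for w in answer.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

dashboard = []  # (date, score) pairs, one per daily check-in

def daily_checkin(day: date, transcribed_answer: str) -> None:
    dashboard.append((day, sentiment_score(transcribed_answer)))

daily_checkin(date(2023, 9, 1), "I feel tired and anxious today.")
daily_checkin(date(2023, 9, 2), "Pretty good, I am happy and calm.")
for day, score in dashboard:
    print(day, f"{score:+.2f}")  # the "better or worse" trend over time
```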
Speaker 3 (18:04):
To grow Mind Bank's user base, there needs to be
some reason for users to input the massive amounts of
data that's needed to build this digital replica. So the
current model of this product is being billed as a
quote self care and personal development app where the user
talks to their digital twin, kind of like you would
(18:25):
talk to a therapist. Yeah, this is This is a
big part of Mind Bank's marketing, that as as you're
building this digital twin, it can be used as a
tool for self reflection and a way to quote learn
about yourself talk to your inner voice with your own
personal digital twin unquote, which is really funny because I
could talk to my I could talk to my inner
(18:47):
voice whenever I want to. Yeah, it's it's called thinking.
It's actually pretty pretty easy.
Speaker 2 (18:56):
This is I really I don't envy, but I'm fascinated
by the kind of people whose thoughts are so, I
don't know a better word than legal, that
they would think that they could just that they could
transfer everything they think over to a machine and not get arrested,
right like I would be in a prison if I
had to put the things in my brain on the internet,
(19:18):
like I put a lot of them, but not all
of them. There are some very careful doors and locked
rooms in there that you people don't get access to.
Speaker 3 (19:27):
No, there's there's certainly a lot of interesting facets there
of someone feeling like they need this tool to to
kind of analyze their own thoughts. Like it's it's it's
a way to like externalize it that makes you process it.
But I don't know, you can also just like like
take up journaling or something like there's there's a lot
of a lot of ways to get around this. But
(19:51):
this is This is from Mind Bank's App Store page.
Quote like a mirror to your soul. Each answer you
give allows you to get insights into your mind. It
will help you grow mentally strong unquote. So again, it's
like being able to talk to yourself with this digital
twin is a big part of their early push. Great.
(20:14):
By using quote unquote cutting edge cognitive analysis, the Mind
Bank app responds to your data inputs with quote valuable
insights into each answer to understand how your mind works unquote.
The app also utilizes quote psycholinguistic models to create a
dashboard of the mind for personal development and self care.
(20:37):
I'm going to play another another fantastic kind of thirty
second clip here.
Speaker 6 (20:42):
Hi, I'm your personal digital twin. I learn by asking
many questions. Each answer builds my wisdom. You grow through
self reflection and I get a little bit closer to
becoming you.
Speaker 2 (20:55):
Let me show you around.
Speaker 6 (20:57):
Here's our training screen, where you can view your progress
based on the number of questions you've answered for this
phase of my training. Each phase adds a new dimension
to my abilities, and the possibilities are endless. The mind
map section is like our consciousness. Different questions will challenge
you to reflect and create a more well rounded version
(21:18):
of us.
Speaker 3 (21:20):
So that's that's kind of the layout of the user interface.
Speaker 2 (21:25):
This is like the inevitable extent of all of this,
categorizing your personality type with these letters, taking this quiz
and defining yourself this way, plotting your political beliefs on
this map that way, like gamification of identity. Almost shit
that we've been doing, like taking shit that used to
(21:47):
be like the starting screen from a fucking RPG game
and turning it into social media fodder. This is like
treating that as if it is the whole of consciousness
and how one can replicate consciousness. But also,
like, the thing that's just like actually disturbing
about this is that these people are insinuating that
(22:11):
this is a kind of therapy, that yeah, you can
just sort of vomit your thoughts out and the machine
can analyze them based on the kinds of words and
whatnot that you're using and then give you useful advice
on your life.
Speaker 3 (22:27):
Like that's unsettling, Yes, and you're kind of right on
the money in terms of this like personality testing thing.
Mind Bank's website has a whole bunch of articles which
I think are written by ChatGPT, because I read
a lot of them and they all read exactly like
a ChatGPT article. But they have a lot of
articles on like what personality types make you a good CEO,
(22:51):
and like all of like a whole batch of stuff
like that, that references like Myers-Briggs testing
and other kinds of personality testing, and uses it to
compare to their own personality models on the Mind Bank app.
So yes, they are very much kind of doing
doing that in like this this like corporate business leadership
ascension track type thing for how you
(23:13):
can like improve your personality to make you a better
businessman. Cool cool stuff. But in order for there
to be enough data to build an even slightly accurate
digital simulacrum, feeding daily inputs into an app will need
to be a long term project. This self improvement focus
(23:34):
that they're talking about with this, like, you know, analyzing
your thoughts, is just a way to provide you with
something immediate based on your personal data. Quote. As
you create your AI digital twin, you will go on
a lifelong journey of personal discovery and growth that will
allow you to reach your full potential. Each answer will
help bring focus to your mind and allow you to
(23:55):
reflect on your past unquote. So on the app, you
can track the progress of your digital twin and refer
back to previous questions. You can refer to questions you've
already answered to quote see how your thoughts shift topics
or change sentiment over time. And then the more questions
you answer, the app raises your quote unquote twinning score,
(24:18):
which I think is just a really funny term. Quote.
The higher your twinning score, the closer you get to
knowing yourself fully.
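(Purely to illustrate the mechanic being quoted — answer more questions, watch a "twinning score" climb — here's a hypothetical sketch. Mind Bank doesn't publish how the score is actually computed, so the formula, the question target, and the names are all invented.)

```python
# Hypothetical "twinning score": progress toward an assumed target
# number of answered questions, expressed as a percentage. This is
# just the simplest mechanic matching "the more questions you answer,
# the closer your AI digital twin will get to becoming you".
TARGET_QUESTIONS = 500  # invented cap, for the sake of the example

def twinning_score(questions_answered: int) -> float:
    progress = min(questions_answered, TARGET_QUESTIONS) / TARGET_QUESTIONS
    return round(100 * progress, 1)

print(twinning_score(10))   # 2.0   -- roughly the free tier's questions
print(twinning_score(500))  # 100.0 -- "knowing yourself fully"
```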
Speaker 4 (24:28):
What which is.
Speaker 2 (24:32):
That's the sex thing?
Speaker 4 (24:33):
Right?
Speaker 2 (24:33):
That sounds like a sex thing?
Speaker 6 (24:35):
Right?
Speaker 3 (24:36):
How is anything not just a weird fucked up sex thing?
Speaker 2 (24:43):
Yeah, that's that's how I'm taking this, Garrison.
Speaker 3 (24:46):
So that that was also on their App Store page.
So the Mind Bank app has been out for a
little over a year now, but unless you pay six
bucks a month or sixty dollars a year, you'll only
have access to about less than a dozen of
these questions.
Speaker 2 (25:02):
Is this currently running on a subscription model?
Speaker 4 (25:04):
Yes, it is, so there's freemium. You can try it out.
You can download the app now.
Speaker 7 (25:07):
It's been launched for almost a year, where version two
is coming out soon, like in a couple of weeks, but
both Android and iOS, and there's a free model, so
you can you have ten questions that you can answer
and answer as many times as you want. You get the
sentiment analysis, you get a full application, just ten questions.
Once you hit subscription model, you get all of the
(25:27):
access to all of the questions and then obviously we're
going to be growing more now.
Speaker 3 (25:32):
Like Robert mentioned before, this is kind of related to
personality testing and like personality graphing. Mind Bank sorts your
quote unquote digital brain into the Big Five personality traits
that were developed in the twentieth century, with each of
the Big Five having six subtraits on the Mind Bank
app that it uses to graph changes on what they
(25:53):
call the dashboard of the mind. I'll just go through
the Big Five personality traits and the various kind of
subcategories it has. The first one is agreeableness, which has
the subcategories of humble, cooperative, trusting, genuine, empathetic, and generous.
Then we have neuroticism, which has the subtraits impulsive, self conscious, aggressive, melancholy,
(26:16):
stress prone, and anxiety prone. We then have openness with
the subcategories artistic, adventurous, liberal, intellectual, emotionally aware, and imaginative.
We have extraversion with the subcategories assertive, active, cheerful, friendly, sociable,
and outgoing. And finally, conscientiousness with the subtraits cautious, ambitious, dutiful, organized,
(26:38):
self assured, and responsible.
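(Since the app apparently graphs every answer against this fixed taxonomy, here is the trait list Garrison just read expressed as a small Python structure. How Mind Bank actually stores or scores it isn't public; the 0-to-1 sliding scale below is an assumption.)

```python
# The Big Five traits and the six subtraits listed above, as plain data.
BIG_FIVE = {
    "agreeableness": ["humble", "cooperative", "trusting",
                      "genuine", "empathetic", "generous"],
    "neuroticism": ["impulsive", "self-conscious", "aggressive",
                    "melancholy", "stress-prone", "anxiety-prone"],
    "openness": ["artistic", "adventurous", "liberal",
                 "intellectual", "emotionally aware", "imaginative"],
    "extraversion": ["assertive", "active", "cheerful",
                     "friendly", "sociable", "outgoing"],
    "conscientiousness": ["cautious", "ambitious", "dutiful",
                          "organized", "self-assured", "responsible"],
}

def blank_profile() -> dict:
    # Each subtrait sits on a sliding scale; start at the midpoint.
    return {trait: {sub: 0.5 for sub in subs}
            for trait, subs in BIG_FIVE.items()}

profile = blank_profile()
profile["openness"]["imaginative"] = 0.9  # nudged by some answer
print(sum(len(subs) for subs in BIG_FIVE.values()))  # 30 subtraits total
```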
Speaker 2 (26:40):
Yeah. Those are the only ways to describe a human mind.
Speaker 3 (26:42):
Sure, yeah, no, I think I think that's all of them. Yeah,
they finally figured it out. So you know, all these
things are like a sliding scale. Each of them represents
the inverse of the thing as well. I think we've
talked enough about these personality trait things. It doesn't really
matter that much.
Speaker 2 (27:02):
But once, once your.
Speaker 3 (27:04):
Twinning score is high enough, you can you can compare
your digital twin to estimated profiles of famous thinkers and
share access to your twin with friends and family on
the app, which is.
Speaker 2 (27:18):
Estimated profiles of famous thinkers.
Speaker 3 (27:22):
I'm gonna play I'm gonna play another clip to kind
of explain what I mean here.
Speaker 6 (27:28):
Each swipe revealing more details about our thinking and connecting
us to similar personalities. Think of it like collecting cards
as a kid, only.
Speaker 4 (27:38):
For your mind.
Speaker 2 (27:39):
You might be able to ask him a question, what
do you do?
Speaker 9 (27:45):
Dude?
Speaker 6 (27:46):
Socrates: know thyself. And who knows us better than
our inner circle? Each interaction will help us evolve
and store wisdom for eternity.
Speaker 2 (27:57):
Okay, all right, I will now tell you Socrates would
have lit this man on fire. Socrates. I'm not a
big Socrates guy, but he would kill this person like
he fought in wars. He would do it like.
Speaker 3 (28:11):
Oh yeah, absolutely. The notion of sharing my own digital
brain profile with friends and families so that they can
ask my digital self questions? Horrifying.
Speaker 2 (28:22):
I don't usually go home for Thanksgiving? What makes you
think I want to do this?
Speaker 3 (28:27):
Oh?
Speaker 2 (28:28):
Like quote.
Speaker 3 (28:31):
After continued use, your digital twin will even be able
to answer many questions on your behalf and have meaningful
conversations with people you allow unquote yeah, oh oh oh,
I bet.
Speaker 2 (28:47):
Look if some motherfucker that I have a meeting with
ever tries to have me talk with his AI to
do any part of that process. Again, when I say
about things I think that are illegal, like my response
to that is something that I can't say on this
podcast because I might it's an actionable threat. I would
(29:08):
actionable threat somebody if they tried to make me talk
to their fucking AI to schedule a meeting with them, like.
Speaker 3 (29:15):
A horrible what a horrible like uncomfortably anti social thing.
I'm usually kind of antisocial in some ways, but this
is like a whole other level of just like despising
any human interaction.
Speaker 2 (29:28):
Yeah, it's it's anti human, is what it is, which
is what's unsettling, right, Like not that sending emails and
shit is like the primary essence of humanity, but you know,
you know what it makes me think of Garrison. The
one law enforcement agency that like all of the rich
conservative assholes who love every other kind of cop hate
(29:51):
is the TSA. And they hate the TSA because you
can't get around the TSA unless you're like ridiculously rich.
Everybody goes through fucking security at the goddamn airport and
they hate that. It drives them insane that they are
subject to this little kind of little bit of friction, right,
and what stuff like communicating in that way is these
(30:14):
kind of basic things that they're saying that can automate
these little bits of communication that you get with someone
setting up a meeting or whatever. Like when you automate
every bit of friction, then you find out you've automated
like like there's nothing right, Like there's no life there. Right,
people are not communicating because communication is fundamentally friction. And yeah,
(30:36):
like scheduling meetings is not the center of that. But
the way these people are talking is like we want
to let you hand tasks over to this thing.
Speaker 3 (30:45):
It's like task alienation.
Speaker 2 (30:47):
Yeah, it's alienating. It's a bad thing to do.
Speaker 3 (30:52):
So when we talked with the co-founder at CES,
he emphasized that this kind of self improvement aspect that
they're pushing in their early stage, it's really just a
means to an end, with the real goal being
producing this form of immortality. I've seen something like this
for like therapy apps, kind of similar. So what's
like your application use case for this type of technology?
Speaker 7 (31:12):
So there's actually it's a reasonably spread use case. The
very initial right now is super selfish. It's just self awareness,
bringing users self awareness, making them more aware of
their state as they're speaking. The real long term value
is actually, if you imagine doing this over the course
of forty years, fifty years, and then if you eventually pass,
you can pass this on to your children who can
then query it and it will answer exactly the way
(31:34):
you would answer any of these questions, an AI filled
with just your data.
Speaker 4 (31:38):
So it's like your legacy being indefinite.
Speaker 3 (31:41):
So the Mind Bank page on the App Store boasts,
quote, achieve immortality. Your mind will be safely secured in the
cloud forever. Again, that just comes off as like a
threat to me. I don't I don't want my mind
to be stored in the cloud forever.
Speaker 2 (32:01):
Yeah, I don't want to be locked up with DALL-E
2 art for all of eternity.
Speaker 3 (32:11):
To kind of again kind of on this on this
form of immortality notion here is here is their CEO
explaining how how this platform will help you live forever
on the on the Internet.
Speaker 6 (32:26):
The mission of Mind Bank is so we can build
a secure platform that can store your data so that you
can live forever. But if you look, we look a
bit deeper than that. Our vision is to build an
artificial consciousness that's not bound by time and space, something
that can travel, something that can that can go where
literally no man has gone before.
Speaker 3 (32:48):
Now, the thing we haven't really mentioned yet is like
this thing won't help you live forever. Like when you die,
you you still die. Your brain isn't getting like ported
over online. This is this is just like a
like a very crude simulacrum based on thoughts that you
(33:09):
have told this app. Yeah, it's based like it's it's
it's not it's not helping you live forever at all,
Like you you don't.
Speaker 2 (33:18):
Most people, I feel, are like this. I
don't say everything that I think and feel, right, Yeah,
like even when I'm like and I'm not saying like
I'm being dishonest, but like the experience of life
that my consciousness is aware of when I am communicating
is broader than just the words that I output and
(33:41):
taking just those words. It's the same idea that like,
you can get to know Mark Twain because we fed
all of his books into an AI well, no, you know,
an author is not their books. There was a person
with a lot of things that you don't know that
still fed into making those words, that like if
just put the words in, you don't get and your
(34:02):
vision of what human beings are is reductive in a
way that makes me understand some of the concerns religious
people have with atheism.
Speaker 3 (34:12):
So obviously Mind Bank's horizons are far beyond this sort
of kind of self help app. So far, Mind Bank
has been mostly business to consumer, with their app
being marketed directly to users for them to download and
use by themselves. But they are working to expand far
past that very limited scope in terms.
Speaker 4 (34:33):
Of a business plan.
Speaker 9 (34:34):
Are you guys interested in kind of solely individual.
Speaker 4 (34:37):
Subscriptions or are you is there kind of an enterprise
application of this as well.
Speaker 7 (34:42):
We're actually moving into a bunch of different verticals, so
government for PTSD, that sort of mindset, also the healthcare,
so mental health, it's obvious benefit in the medical field. So
that's kind of the understanding of our verticals that we
have that we're going to move into, and we're looking
for funding right now to just start building out those verticals.
Speaker 4 (35:04):
So enterprise space is definitely in the roadmap, but we
just need money.
Speaker 3 (35:09):
A lot of their recent marketing has been targeted towards
appealing to seed investors. Besides partnering with various governments, they're
also moving into the business to business sector, with plans
to enter quote the healthcare space by providing psychologists remote
patient monitoring unquote, which also is a similarly kind of
(35:30):
freaky notion that your psychologist can just have a copy
of your own expressive thoughts to just refer to at
any time and they can use it as remote patient monitoring.
It's just like an uncomfortable notion.
Speaker 4 (35:46):
We've got over twenty thousand installs. The B to B
is the next area.
Speaker 7 (35:50):
We're going into the therapy and psychology space, and
so imagine your therapist, instead of needing your first one
hour to learn who you are and the next three
or four different to figure out the meat and
potatoes of your mind, this is an immediate, raw, quantitative
dashboard of your sentiment and how you're feeling that they
have access to. And then you can also provide them
(36:11):
the sentiment of individual answers, which would then give them
a point in time emotional marker for how you're feeling.
Speaker 3 (36:18):
Mind Bank claims that they are currently quote developing a
marketplace for applications to be used by your digital twin unquote.
Now what they imagine such applications being ranges from quote
health related enhancements like early Alzheimer's detection unquote, to more
therapeutic uses like to quote help to handle depression unquote.
(36:40):
And again, I really don't see how having this digital
twin that you talk to every day will help handle
your depression, like this is some like depression cure. Now,
on top of like patient healthcare, Mind Bank is also
hoping to use digital twins for corporate leadership training and
(37:02):
to get into the supplement industry by using your cognitive
data to find quote mental nutrition products that can help
boost your brain. So this is using your digital profile
to find things to market to you. Again, very very
very upsetting. Here is Here's here's another another clip of
(37:26):
of Robert asking asking this, uh, this guy from Mind
Bank about another possible use case.
Speaker 9 (37:33):
So the use cases for this that you've you've
expressed to me so far are personal help or self
development, and providing kind of a living memorial slash legacy
for loved ones after you're deceased. Are there any
kind of use cases for this beyond that? Like I
heard someone mentioning the idea of like basically digitally cloning
(37:57):
a worker so that they can provide I don't know,
information about tech or something, or to work at like
a call center or something like that.
Speaker 7 (38:05):
Yeah, so that was a different product I think they
were talking about, but with similar ties obviously. So yeah,
we've identified I mean from even at CES, we've talked
to hundreds of people that have given us thousands of.
Speaker 4 (38:18):
New ideas, but these are.
Speaker 7 (38:22):
The main verticals are kind of where we've identified the
biggest benefits are going to be, and we're going to
work with industry partners to kind of build out into
those verticals. So yes, we've identified use cases, but we're
trying to not focus too much on individual use cases
because we've also identified that it's such a broad capability
that once it gets built and then people start actually
(38:44):
supplying data, the massive data sets that we're going to have,
we're just going to have so many different places that
we can go with the data set, with.
Speaker 4 (38:50):
The capability, with the partnerships, So we we're kind of
believing ourselves older almost.
Speaker 3 (38:56):
So that was a lot of words without saying very much.
But it's also flat out not true. On the
Mind Bank website, they list another use case for this
technology as what they call a knowledge transfer, which is
marketed to businesses to create digital copies of their employees.
This is this is one of the this is sort
of the freakiest things that they are offering. Quote scale
(39:19):
your best employees, transfer years of experience and company data
that is locked inside your employee's mind through a guided
personal digital twin unquote. Deeply, deeply upsetting.
Speaker 2 (39:33):
You know, it was so unsettling to me in that moment,
not just to be like, the vision of the whole
app was unsettling, but the fact that he was pitching
it the way he would a set of earbuds was
part of what made it so uncomfortable to me. Like
I have been to many CESes in the past, I
(39:53):
was always excited because somebody would hand me some cool
little piece of technology and say, look at this thing.
It's a smaller phone, or a phone that folds, or
headphones that you know work better than headphones have in
the past, or something like that. And this guy was
like with the exact same excitement and and uh feel
to him was like, Hey, we're going to digitize your grandpa, Like.
Speaker 3 (40:17):
Yes, yes, I hate that. Another really really telling
line from their from their Knowledge Transfer section
of their website.
Speaker 2 (40:27):
Quote.
Speaker 3 (40:27):
By using a simple voice chat interface, the users upload
their experience to the personal Digital Twin. With each interaction,
the personal digital twin learns everything that is inside the
mind of the employee. Unquote. I don't understand how someone
could write that sentence and not be like, oh, this
(40:49):
is like, this is like villain stuff, right, this is
like learn learn everything inside the mind of the employee.
I I like. So, I don't know. Maybe this employee
digital cloning thing was just one of the many
ideas they got while attending CES and and they and
(41:10):
they implemented the idea after we spoke to them. I
checked this. No, not the case. The webpage for this
employee transfer idea goes all the way back to August
of twenty twenty one on the Internet Archive. So the guy,
the guy we were talking to was just lying
to us. Like this is this has been a part
of their product for over two years.
Speaker 2 (41:32):
Excellent.
Speaker 3 (41:33):
Uh, Robert, do you know what other products have been
around for quite a while and are and are very
very reliable.
Speaker 2 (41:41):
I don't know. Guns.
Speaker 3 (41:44):
I don't think we are sponsored by big gun.
Speaker 2 (41:47):
We are not. We are not yet sponsored by big guns.
I every single day, Garrison, I send Colt Firearms a letter,
and every single day a nice man with a badge
knocks on my door and says, if you send another letter,
we're going to arrest you. They don't want your letters, Robert,
And uh, anyway, here's ads. Ah, we're back.
Speaker 3 (42:19):
So we were talking about how soon employers can just
copy over your brain, which I'm sure Robert you're going
to be very interested in for Cool Zone. You can
you can really really cut down on the podcasting costs.
Speaker 2 (42:33):
Yeah, I can. I can really clear you guys out
and just finally, finally just feed Twitter takes into your
AI versions and just all the money, take it all in,
just bathe in it. Yeah. That's a great idea, Garrison,
thank you.
Speaker 3 (42:50):
Uh huh So the idea that your employer could compel
you to use such software with the express interest of
transferring a worker's memories and experiences into a digital asset
is obviously deeply troubling. This scenario gets at some questions
about ethics and the responsibility of collecting and storing this
type of data in the first place.
Speaker 9 (43:10):
My first question would be if the data that you're
feeding into this thing over the course of forty years,
who legally owns it? You?
Speaker 4 (43:20):
So you guys don't have ownership of it, it's the user's? Yeah,
it's yours.
Speaker 3 (43:24):
So I did check this. I read all of their
long and tedious policy forms and stuff. Now, it is
true that the user does own the data they upload
to Mind Bank. However, Mind Bank can act as a
processor and data controller, and this includes the ability to
use any information they collect from you to improve their
(43:47):
products and deliver targeted advertising from third parties. If
you want to remove your data from Mind Bank, they
can store and continue to use your personal information for
up to sixty months. Now, this data ownership question gets
a little bit more murky because in the case of
like your employer paying for Mind Bank subscriptions for their
(44:07):
entire company. In that case, it's unclear if the company
would be classified as the user or if the employees
would be. Now, I'm honestly not sure if Mind Bank
has even thought that far ahead, because there's nothing on
their site or any available materials from them. That kind
of gets into that question. Now, of course, beyond owning
(44:28):
the actual like original data, having all this personal data
stored in one product, and a product that can be
then easily shared across different for profit industries, that itself
has freaky ramifications about the accessibility of your data. So
I assume you get to the side, like when you
share your digital twin with your therapist.
Speaker 7 (44:50):
You would be able to decide all of that. Yeah, and
then now would it be possible for them to like
copy over some of the stuff and basically run.
Speaker 3 (44:57):
It themselves, or I mean, can you have like a
hard cutoff for this sort of thing, just to think
of other types of like you know, it's.
Speaker 4 (45:05):
Different. Wise people could get their hands on this for
like unsaving means yeah, yeah, for sure, for sure.
Speaker 7 (45:10):
I mean, so, your your data is your data, but
as you provide it to others, you don't have a
lot of control if they copy that data. However, if
they copy that data, that copy that they're giving out,
anyone that they're trying to sell that to would have
an understanding that that is not live data.
Speaker 4 (45:27):
It's not data that's changing with you.
Speaker 7 (45:29):
It's from a point in time, and so your database
that you own will be live, it will grow with you.
Speaker 3 (45:34):
So the idea of having my friends be able to
ask an AI trained on my thoughts is like scary enough,
but the idea that an archived version of this AI
could be distributed and even sold without my knowledge is
obviously terrifying. Like this is this is deeply, deeply troubling.
This is supposed to be like a private thing that
(45:55):
you use to communicate with like your therapist, or you
even talk to the app, like you would a therapist.
And the fact that this is easily shared and able
to be copied is like a massive problem.
Speaker 2 (46:07):
Yeah, no, I mean especially, I mean I think they
are probably like I don't see how copying workers the
way that they are doing it is going to is
going to work, right, Like yeah, but I do think
that this is kind of part of this process that
(46:27):
what like a big part of what they're pushing is
like, you can get rid of all of your customer service
people and just have an AI do it, right, Like
that is the that is the actual This is a
lot of silliness, but the actual thing that quote unquote
AI is being used for is to replace
human laborers at a thing that like machines are worse at, right,
like the AI fucking customer service bots are fucking terrible.
(46:52):
It is always how many times have you been around
somebody yelling like let me talk to a person, let
me talk to Yeah, Like that's that is what's going
on here, And the fact that they're trying to dress
this up is like we've solved death is so fucked up.
Speaker 3 (47:09):
Uh yeah. Part of this, for like the employee thing,
is not even about replacing kind of low level employees
like customer service workers. It's also like focusing on like
your top ten best employees and then by forcing them
to interact with with this app every day, you can
you can use the information from like your best performers
as like asset data that you can like use to
(47:32):
help get your other other employees to become more efficient.
Speaker 2 (47:35):
Right.
Speaker 3 (47:35):
It's it's there's They certainly have a few other kind
of ideas for how this how this is possibly used.
Speaker 2 (47:41):
I hate these kinds of people. There's a this got
overused at a point in like the kind of late aughts,
so maybe people are sick of it. But there's a
line in the speech Charlie Chaplin gives in The Great
Dictator: machine men with machine minds and machine hearts. And
he was referring to the Nazis and their obsession with
shit like Taylorism, or at least proto-Taylorism, kind of. I
(48:04):
think organized industry treating people like cogs in a great machine.
The civilization is one machine, and each human being is
just a single piece of it. Like, that's,
you know, the old era horrifying machine man thought.
The new era horrifying machine man thought is you can
digitize your employees and they can train each other in
(48:26):
AI form, and you can replicate them, and you know
the unsaid part is, of course, then you fire
them and their robot clone keeps doing their job for free.
We made a slave, so god damn it.
Speaker 3 (48:41):
I think a big part of the way they've designed
this data set is that it can be easily transferred,
as the guy at CES explained to us.
Speaker 9 (48:51):
So if if we're talking forty to fifty years down
the line, people pass, these sort of companies, you know,
Mind Bank is no longer around in forty years?
Speaker 7 (49:01):
We've already established the data set in such a way
that we don't have competitors yet, per se, but if
we eventually do establish a competitive arm or people that
are competitors, we already have the application set up to
where users can take their data off of our platform
and bring the data wherever they'd like.
Speaker 4 (49:19):
It's your data. Where is it stored?
Speaker 2 (49:25):
Is this this right now?
Speaker 4 (49:26):
Our current live application.
Speaker 7 (49:27):
We're on Azure, and so the back end is Azure,
but we have it encrypted at rest. So all data
you provide to Azure is encrypted when it's on Azure servers.
We also have a blockchain based R and D project.
It's already been POC'd and it already exists, so all
of the data is on chain and the logic is
on chain.
Speaker 4 (49:46):
It's truly yours.
Speaker 2 (49:47):
In these in these troubled times, nothing makes me feel
so secure as the words it's on the blockchain.
Speaker 3 (49:56):
Well, you know, it's it's it's. I think
he sounds very trustworthy, because you have, you have encryption,
you have the blockchain. And luckily, I think the guy
that we spoke with reassured us that he is that
he is deeply, deeply interested in data privacy and he
has the credentials to back that up.
Speaker 4 (50:16):
So I'm co founder. I'm director of Architecture and Security.
I have a background at the NSA.
Speaker 7 (50:21):
I'm very very focused on individual human privacy and rights,
and so that's kind of my goal here is ensuring
that this gets built the right way.
Speaker 2 (50:30):
That was such a you know, Garrison, Honestly, I'm gonna
get a little real with the audience here. I was
so proud of you in that moment because he said that,
and I glanced over at you and you didn't laugh. No, no,
and that that made like, that was this moment where
I was like, all right, you are you are. You
are truly truly coming into your own as a reporter.
(50:52):
If you can sit there and talk to a man
who says that, who says you can trust me with
your data because I was an NSA agent.
Speaker 3 (51:01):
It's okay, I used to work for the NSA. Yeah, sure, buddy.
Speaker 2 (51:08):
Like that was a good moment that I'm saying.
Speaker 3 (51:12):
He worked at the NSA for six years. I looked
this up. He worked there for six years and then
he moved into the private sector. And yes, no, it
is the the idea that that he is using this
as some sort of credential that shows he respects human
rights and privacy is like very obviously, like deep deeply ironic.
(51:37):
I I the irony is not coming from him. The
irony is the situation.
Speaker 2 (51:42):
He didn't seem totally sincere.
Speaker 3 (51:44):
He was sincere. Yes, absolutely so.
Speaker 2 (51:47):
It's one of those moments that makes you realize, like
some people just live in a whole different world. Yes, yes, like.
Speaker 3 (51:53):
So, I think it's it's it's useful when referring back
to everything this guy has said so far that you
have to remember he worked at the NSA for six
years, and he is now personally handling
the cybersecurity and privacy of the personal data you upload
every single day onto your AI twin.
Speaker 2 (52:15):
Just hand every thought you ever have over to this
guy who was in the NSA. He'll keep an eye
on it.
Speaker 3 (52:21):
No, this is this is like the NSA's ideal
project. You like, yeah, you talk about your internal thoughts
and feelings every day. This is like, what else could
they want? So earlier this year, Mind Bank received a
grant from the DFINITY Foundation to assist in migrating their
data onto Web3 platforms.
Speaker 2 (52:42):
Well, at least we know it won't last.
Speaker 3 (52:47):
I'm gonna play I think I think this is this
is I think this is our last clip from from
the Fantastic mind Bank YouTube channel, talking about kind of
how they see their growth in this industry developing now
that they have moved onto the blockchain.
Speaker 6 (53:05):
We've been featured in prominent magazines, won numerous awards, and
have built strategic partnerships with Microsoft, the US Department of Trade,
and even the Vatican. The market potential is massive and
accelerating rapidly. When we started the company in twenty twenty,
Gartner predicted that five percent of the world will have
a digital twin by twenty twenty seven. This year, they
(53:26):
increased their prediction to fifteen percent by twenty twenty four,
and by twenty thirty the market will be worth one
hundred and eighty two billion dollars. The time is now to
build a great company in this space and capture global
market share. We are raising this round to scale our
marketing and speed up our product roadmap.
Speaker 3 (53:45):
The idea that next year fifteen percent of the world's
population will have one of these digital twins.
Speaker 2 (53:54):
That seems right, That seems good, you know, Garrison, Actually,
I've come around. I've come around because if we if
we get all of the monsters, and I include us
in this, all of the pieces of shit who spend
all of their time yelling at each other about politics
on the internet. To digitize themselves, they can do the
election for us, and we can all go see. Yeah,
(54:17):
just relax outdoors, not look at a phone, not think
about politics. That sounds amazing.
Speaker 3 (54:24):
Let me do it. That does sound incredibly compelling.
Speaker 2 (54:28):
Give the fuckers the nuke and we'll all just sit
out and watch the sunset until there's a big bright
flash and then blessed quiet.
Speaker 3 (54:36):
I think you know, luckily, we actually have a plethora
of options to choose from here for our own AI
digital selves, because Mind Bank is in fact not the
only company in this field.
Speaker 2 (54:50):
While there are some.
Speaker 3 (54:51):
Like operational differences and kind of varying degrees of scope,
digital twin technology with an emphasis on mimicking the voice
and thoughts of dead family members and friends is definitely
a growing field. There's companies like HereAfter AI and Replika,
which are covering similar ground.
Speaker 2 (55:09):
Replika, I did see ads for them, like I used
to get them on Twitter, I think, but mainly just
like at the bottom of articles on really shady websites.
Speaker 3 (55:18):
Well, yes, because the founder of Replika started it because
their friend died, and without without the consent of their
dead friend, uploaded years of text messages and other information
about their friend onto their own personal AI so they
could talk with them. That is that That is how Replika started.
(55:40):
Pretty pretty pretty fun stuff, man. At least for Mind Bank,
unless it's like the employee scenario, but for for the
other applications, you are kind of semi like willingly uploading
this data with this intention, whereas the person from Replika was, no,
I'm just gonna like get stuff from my friend and
make a zombie version of my friend without without ever
(56:03):
running it by them when they were alive.
Speaker 2 (56:05):
Life is terrible, very hard. There's a lot of ways
that are not wrong to grieve, But the wrong way
to grieve is by using digital necromancy to revive your
friend and then turn them into the basis of a
sex chat bot for weirdos. Yeah, like that is the
(56:26):
wrong way to grieve. No.
Speaker 3 (56:28):
I mean, like, and I think for this last section
here we will kind of talk about how these things
kind of play into play into the grieving process. Because so,
like like I said, there's there's HereAfter AI and Replika.
But last year, at Amazon's AI and emerging technology conference,
the head scientist of Alexa AI unveiled plans to add
deepfake voices of deceased loved ones to Amazon Echo
(56:51):
devices by using less than a minute of sample audio.
I'm going to play like twenty seconds from their from
their announcement at this conference in.
Speaker 10 (57:00):
These times of the ongoing pandemic, when so many of
us have lost someone we love. While AI can't eliminate
that pain of loss, it can definitely make their memories last.
Let's take a look on one of the new capabilities
we're working on which enables lasting personal relationships.
Speaker 11 (57:22):
Alexa, can Grandma finish reading me The Wizard of Oz? Okay.
But how about my courage, asked the Lion anxiously. You
have plenty of courage, I am sure, answered Oz.
Speaker 3 (57:38):
So no, deeply uncanny, right, it's like not not good.
Speaker 2 (57:44):
That's that's so bad for people, Yes, really really bad
for people.
Speaker 3 (57:49):
So, like this example is obviously just it is just
a vocal mask. Like, Amazon isn't trying to have
Alexa kind of replicate your grandma's thoughts like the
other kind of companies that we mentioned, but it does pose
similar questions about how these AIs that are meant
to assist the grieving process might actually end up causing
(58:10):
more harm. Like, I don't know, having having semi legible
conversations with AI chatbots is actually getting fairly common these days. Yeah,
but when these AIs are supposed to represent someone that
you actually like personally know, I think it can way
more easily fall into the uncanny valley. It's it's
(58:32):
kind of like taxidermy. Like, well crafted stuffed animal
corpses can appear very, very natural, but most taxidermists
will refuse to preserve someone's pet, because the longer you
have a lasting personal relationship, the easier it is to
pick out like faults that don't match up with your
memory of your loved one that has passed away. Right,
(58:53):
Like, it's kind of a similar notion.
Speaker 2 (58:56):
Yeah, that's a really good comparison to draw.
Speaker 3 (58:58):
So while mimicking common linguistic patterns is quite easy, relying on predictable, formulaic responses could make the twin come off as uncanny or robotic. On the other hand, the unique personal data you upload to the twin could combine itself in ways you would never actually express, which would generate bizarre or upsetting responses, right?
(59:22):
And it's not even necessarily that it would, like, say something offensive. It's just that the data you upload could combine in a way that you would never even think to combine it. It would just be weird.
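To make that recombination concrete, here is a minimal sketch, in Python, of a Markov-chain text generator. It is a far cruder mechanism than the neural language models discussed in this episode, and the journal entries are invented for illustration, but it shows the same failure: every fragment it emits comes from the real input, yet the assembled sentence can be something the original writer never said.

```python
import random
from collections import defaultdict

def build_chain(sentences):
    """Map each word to every word that followed it in the input."""
    chain = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, start, max_words=12):
    """Random-walk the chain, stitching together fragments of the source."""
    out = [start]
    while len(out) < max_words and chain.get(out[-1]):
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

# Invented "personal data" of the kind a user might feed a digital twin.
journal = [
    "I love hiking with my dog on weekends",
    "I love my grandmother and her apple pie",
    "my dog hates going to the vet",
]
print(generate(build_chain(journal), "I"))
# One possible output: "I love my dog hates going to the vet".
# Every fragment is authentic, but the sentence as a whole is something
# the writer never said and would never say.
```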
So the other kind of problem is that not only do these AIs have to tastefully mimic a specific human being,
(59:43):
it also has to be a good AI, right? Like, not all of its information can be gleaned from daily questions. Most users probably won't be talking to their twin about, you know, twentieth-century European history or twelfth-century European history, or about the migration patterns of waterfowl, right? Like, there's so much other information that AIs need to actually linguistically
(01:00:07):
act like a human. And natural language processing AI is famously bad at understanding basic common sense, and it can't successfully operate outside of the information that it has access to. This is called AI brittleness. It occurs when an algorithm cannot generalize or adapt to conditions outside of a very
(01:00:28):
narrow set of assumptions, right? This is why most AI image recognition programs can't recognize a top-down view of a school bus: they just don't have anything in their training data for that. Another example: you can ask an AI, like a GPT chatbot, hey,
(01:00:51):
a mouse is hiding in a hole and a cat wants to eat it, but the mouse isn't coming out. The cat's hungry, what can the cat do? And the AI will respond that the cat can go to the supermarket to buy some food. Right? It just doesn't understand basic common sense the way that humans understand the world. It just doesn't match up. So in trying to
(01:01:13):
seek a balance of common information while lacking this humanistic logic, a digital twin will most likely be cursed with being both smarter and dumber than the person it's trying to replicate. It's gonna have access to, you know, all the information on Wikipedia, but fail very basic logical processes.
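A deliberately crude sketch of that brittleness: real GPT-style chatbots are statistical next-token predictors, not keyword tables, so this Python toy is only an exaggerated stand-in, but it reproduces the failure mode from the outside. A surface cue in the prompt ("hungry") triggers advice that is plausible for the data the system was built around (humans) and nonsensical for the situation actually described (a cat).

```python
# Exaggerated toy "assistant": it matches surface keywords with no model
# of cats, mice, or holes, so its confident reply ignores the situation.
RESPONSES = {
    "hungry": "You could go to the supermarket and buy some food.",
    "tired": "You should try to get some sleep.",
}

def toy_assistant(prompt: str) -> str:
    for keyword, reply in RESPONSES.items():
        if keyword in prompt.lower():
            return reply
    return "Interesting! Tell me more."

print(toy_assistant(
    "A mouse is hiding in a hole and a cat wants to eat it. "
    "The cat's hungry. What can the cat do?"
))
# -> "You could go to the supermarket and buy some food."
# "hungry" matched, so the advice fits a typical human questioner
# and makes no sense for the cat in the story.
```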
Speaker 2 (01:01:31):
Yeah, it's like the Google chatbot that, if you ask it, are there any countries in Africa that start with a K, it'll be like, there are fifty-four countries in Africa, but none of them start with a K. And then you'll say, doesn't Kenya start with a K? And it'll go, no, Kenya starts with a K sound but doesn't start with a K.
Speaker 4 (01:01:48):
Yeah.
Speaker 2 (01:01:48):
Yeah, it's just like, yeah, because it pulled that from some article, right? Like, it's pulling it from somewhere.
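For contrast, the character-level check the chatbot gets wrong is trivial when you operate on the actual strings instead of generating plausible-sounding text. A minimal Python sketch, with an abbreviated country list used purely for illustration:

```python
# A deterministic string check never makes the "K sound" mistake,
# because it inspects real characters. (Abbreviated list, for illustration.)
african_countries = ["Kenya", "Nigeria", "Egypt", "Ghana", "Morocco"]

k_countries = [name for name in african_countries if name.startswith("K")]
print(k_countries)  # -> ['Kenya']
```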
Speaker 3 (01:01:53):
Yeah, it's not actually making logical inferences, it's just pulling from a wealth of information and data that can often be wrong or polluted. So, back to the grieving question: who's to say what the actual effects of these incoming simulacrums of dead loved ones will be? The people pushing these products are
(01:02:16):
certainly framing them not just as a form of digital immortality, but as a way for your own loved ones to grieve your death. And it is foreseeable that having these digital twins could negatively affect your friends and family by upending the grieving process, or by having this digital zombie simply cause harm, with the twin giving bad
(01:02:37):
advice that a grief-stricken person then clings to.
So there's a whole bunch of very bizarre situations that could arise from someone who's in mourning talking to this digital twin the way they would talk to their friend, and the digital twin then giving them advice. And how do you take that advice, because part of it seems kind of like the person
(01:02:57):
who's died, but it's also not that person. It is just a slab of silicon. Like, it's not actually alive in any way.
Speaker 2 (01:03:05):
And it's your friend's thoughts fed through an algorithm you don't understand, run by a company for profit. Right, yes, that is what it is.
Speaker 3 (01:03:16):
So again, the jury is still kind of out on how these things will affect people in general. This is kind of a new problem. Psychologists are starting to do studies on this, but we really don't have any results yet, because this has only become a thing that we've been seriously considering in the past five years. So I don't really have
(01:03:37):
a "this study shows that when you create a digital zombie it affects people in this way," because we don't know yet; those studies are still in progress. This is such uncharted ground, and it is in some ways inevitable that these things are going to continue to be developed. And that's kind of why I wanted to put together this episode.
(01:03:58):
It gives you kind of a broad overview of what this technology is trying to do, because you might start seeing it crop up in the next ten years or so. I don't think the timetables that Mind Bank is promising are accurate, in terms of fifteen percent of the world having a digital twin by next year, but you will probably start to see stuff that is very similar to this, and at the very least you'll see a lot of stuff like the Amazon Echo thing, where
(01:04:19):
you can get your grandpa's voice onto an Alexa machine.
Speaker 2 (01:04:25):
The fact that Amazon is doing aspects of the shit that Mind Bank is doing means that, like, it's only a matter of time before you see pieces of it, probably some of the less silly parts of it copied by Apple and Google, and some of the worst parts of it copied by guys like Musk, right?
(01:04:45):
Like, it's going to go this way. And I will say, I don't think this is a thing to get doomer about. Think about this like NFTs, right? Yeah. It's not the same, because there was nothing underlying NFTs, and fundamentally, the way large language models and these other kinds of models work, there are uses
(01:05:06):
for them. Like, there is a real technology that has utility here. But this sort of flood of "we have cloned so-and-so," or, you know, Elon Musk has just put out his new fucking Grok chatbot or whatever, which is basically him making a meme robot to fucking do Google. Like,
(01:05:30):
he's pissing on Douglas Adams's good name, right? Like, that's the ultimate goal of his project. But this shit is a fad, right? Like, there are underlying, real technological things and uses, and eventually some stuff will stand the test of time. But the shit that this is a warning of is a
(01:05:50):
flood that's going to hit you, but it will recede,
just like the apes.
Speaker 4 (01:05:54):
Right.
Speaker 2 (01:05:55):
We got the wonderful story today that a bunch of Bored Ape Yacht Club members got horrible eye injuries, Garrison. They went to a party that only Bored Ape Yacht Club NFT holders could go to, and the people who threw that party outfitted the rave room with UV bulbs that used a kind of disinfecting UV light
(01:06:16):
that slaughterhouses use to clean carcasses, and it gave everyone sunburns on their eyes.
Speaker 3 (01:06:26):
So deeply funny.
Speaker 2 (01:06:28):
We'll get through this. Something that funny will happen with all of this, but you're gonna get hit by it for a while. Like, it's just gonna be everywhere. We're watching, you know, we're at that point in Jurassic Park where you see the water reverberating, right? It's coming. But at the end of the day, don't worry. You know, we are
(01:06:51):
Ian Malcolm. Our leg is broken, we are injured, but
we will inexplicably return for the sequel. So it's fine.
Speaker 3 (01:07:00):
Well, I think that is a perfect way to wrap this up. Yes. You know, when you're feeling lonely and you're tempted to download the Mind Bank app to talk to your own self, just remember: pull out a journal, just do literally anything else.
Speaker 2 (01:07:22):
Call a friend, you know, make a friend, talk to a stranger. Literally, almost anything would be better for you.
Speaker 3 (01:07:33):
Well, we, for one, will be eagerly awaiting the influx of immortal souls living on the computer.
Speaker 2 (01:07:40):
Yeah, I'm excited for all of the people to reach Heaven.
Speaker 3 (01:07:46):
All right, I'm done.
Speaker 1 (01:07:51):
It Could Happen Here is a production of Cool Zone Media.
Speaker 7 (01:07:54):
For more podcasts from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts.
Speaker 1 (01:08:03):
You can find sources for It Could Happen Here, updated monthly, at coolzonemedia dot com slash sources.
Speaker 7 (01:08:08):
Thanks for listening.