Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
You are listening to the Billy Dees Podcast.
Speaker 2 (00:11):
All right, well, hello everyone, and welcome to the program.
I am absolutely thrilled that you are here. If you've
never checked us out before, we are primarily an interview-
and commentary-based podcast. Today we are doing a special
edition of the podcast, live on Saturday night. With
(00:31):
me is the wonderful, the intelligent, and the beautiful Cynthia Elliott.
How are you doing?
Speaker 1 (00:37):
Wow, Billy! Can I pay you extra for that?
Speaker 2 (00:41):
Thank you. You're very welcome.
Speaker 3 (00:43):
I'm doing great, excited to see you, excited to be
doing a special episode.
Speaker 2 (00:47):
Yes, yes. What we're going to be talking about today is AI.
And I have to say this is kind of Cynthia's
brainchild in terms of what we're doing tonight on
the program. A lot of the things that I've run across,
a lot of podcasts and other things about AI, have
been very macro, talking about technology in general, whether or
(01:13):
not it's going to become a situation where, you know,
a computer takes over the earth, and, you know, those
are interesting conversations for sure. But we're going to cover
more than half a dozen things on today's
program. We might be speculating a little bit about
some of them, but for the most part, these are
(01:35):
things that are very plausible, and not in twenty thirty five,
within the next... yes, yeah, I mean this is moving forward. Okay.
These things are really happening, and there are a lot of
layers to this. There's the technology side, which is somewhat interesting,
(01:57):
but then there are other sides to them, which are
breaking new ground in terms of ethics, morality, and how our
society functions. We're kind of getting into a new thing here.
I particularly am not alarmed by a lot of the
stuff we're going to be talking about today. This isn't
(02:18):
a program to scare people. This is just a program
to let you know where things are going. And Cynthia,
I have to say, first of all, you've been on
top of this AI thing for a long time, since
long before it was popular, and I followed it
to a degree. But I think the one dimension
that you bring to it is, well, would it
(02:39):
be correct to call you, to refer to you, as
a spiritualist? Would that be correct? Is
that the proper term?
Speaker 1 (02:47):
Yeah?
Speaker 2 (02:47):
Okay. I think one of the things that you bring
to it is you have a humanity, a
humanistic feel about where things are going, and I appreciate
that about you. So you kind of have the knowledge
of where some of the technology is going, and you
also have that other dimension, which I really appreciate. So
(03:10):
we're going to talk about some of these technologies, and
we're going to get right into it. So here we go.
The first one is the Google Feeling System, and this
is one I was not aware of. Briefly, this
is an AI-trained technology to recognize and respond to
human emotions, which could improve therapy bots, healthcare support, and
(03:35):
customer service. The main risk is manipulation:
corporations could exploit emotional AI to push purchases, political
opinions, and some other things. Let's start with that first.
Let's talk about it a little bit more in your words:
what is the Google Feeling System? Let's start there.
Speaker 3 (03:55):
Yes, basically it's giving artificial intelligence the ability to better
understand the emotions and feelings of humans so that it
can be more effective at everything from treatment to being,
you know, an assistant, to advising them. And, you know, I
understand why they want to do it. My concern is
(04:18):
that technology players, particularly the big boys, have consistently shown
disregard for the unhealthy ways in which technology can be used.
And it's difficult to imagine otherwise. I mean, Google is doing this, but,
you know, once somebody like Google does this kind
of technology, all the
(04:39):
big players want to go after similar types of technologies.
And we've seen with Meta in particular a consistent pattern of saying
one thing and then doing whatever the hell they want,
and that eroded our privacy over years. So for me,
when I look at AI, I look at how
this benefits humanity. As a social theorist, I want to see, like,
how is this going
Speaker 1 (04:58):
To actually be helpful?
Speaker 3 (05:00):
And I can absolutely see how, if it understands thoughts
and feelings, it can become a much better, say,
therapist, someone who can help identify medical issues,
things like that. I get it. But how do they
not misuse that, take the information that they have,
which is very intimate information? If you're able to understand
(05:21):
all that, then you're able to measure it. So
say I have an AI robot friend, you know, what's
to prevent the information that he has on me from
being used by these big tech companies to market to me?
And we know from the last twenty five years they
consistently do the very thing that
Speaker 1 (05:38):
They say, oh, we won't do that.
Speaker 2 (05:40):
Yeah.
Speaker 3 (05:40):
Sure. And so that's the danger here. The beauty of
it is it can be a
really great tool for health and wellness and diagnosis, but
it's also got the potential to allow too much information
to be in the hands of big tech.
Speaker 2 (05:54):
One of the things that you mentioned in your notes
is authenticity, and I'm not necessarily meaning authenticity in terms
of lying versus not lying, but the confusion that could
arise over real human empathy. Is the user going to
equate this technology with real human empathy?
Speaker 3 (06:18):
Yeah. And we're already seeing that as a problem with
AI as it already is, which is why, when people say, oh,
that's not going to happen, it's like, people like
me who are futurists don't base these things on
just picking ideas out of the air. It's usually
pattern recognition and understanding history. And just recently
we've had a slew of people get spiritual psychosis, or some
(06:38):
form of psychosis, and get crushes on AI; some guys have
even, you know, asked AI to marry them. I mean,
we're already confusing what is real with what is not real.
Speaker 2 (06:51):
Yeah. I think one of the things I mentioned on
one of my programs is that I don't necessarily have a
fear of AI becoming, you know, this giant master
computer that takes over everything, although that's not, I guess,
impossible in the grandest sense. But what I
am concerned about is that so many people ask ChatGPT
(07:16):
and these other things about their everyday problems, to the
point where we have to ask: are we really being guided by a technology?
Like, there's a problem in your marriage, you know, what
do I do about it? You ask ChatGPT? Is
that really the source that you should be turning to
for these kinds of support? And that is
(07:37):
going to creep in. It's not going to be like
one big day where the computer just takes over,
like in a science fiction movie. But over time, especially
with younger generations who are already way more dependent on
this type of thing than they should be, is
this an artificial evolution? That would be one of my
concerns. And the other thing we have
(07:58):
noted here, which we kind of tapped into,
is a technology that's being used on behalf of people,
hopefully good people using it with the proper intent,
but they might not be. And is lending it your emotional
well-being, your status, a good thing to do?
(08:20):
So that's one of the things. Do you want to
expand on that?
Speaker 1 (08:23):
No?
Speaker 3 (08:23):
No. It would be one thing if they weren't for-profit companies.
They're already going through an
adjustment right now, because, like, everybody and their brother ran
to do a generative AI app or something like that,
and a lot of them aren't working out. But that's
kind of part of the process, sort of a
normal evolution for technology. But they are for-profit companies, yeah,
and for-profit companies inevitably, and I've seen this time and
(08:46):
time again, for-profit companies may start out with good intentions,
but I have inevitably seen almost all of them become greedy.
And we've already sold ourselves: if you
look at what you sign when you're signing up
for these apps or these chatbots, you're signing your
life away. You're giving them full access to all the
information that they're harvesting, and unless you, you know, can
(09:09):
come up with a way not to, and you're not,
you're not going to be able to prevent them when, say,
some future venture capitalist or whoever invests in the company and
starts pushing for them to make use of the information
they have. As a good example
of this, and why we should be concerned, look at
23andMe. Look what's happened with them.
(09:32):
They sought to monetize all of that data that they
told everyone would never be used that way.
Speaker 2 (09:36):
Yeah, yeah, oh yeah, it would never be used that way.
Of course. Here's another one, kind of similar. This one
isn't particularly alarming; we have one coming up down the
pike that a lot of people are going to freak
out about. Google Smart Skin. The good part of this
is that electronic skin that senses touch could revolutionize prosthetics, telemedicine,
(09:59):
and VR immersion, virtual reality. Now, the idea
of touch in prosthetics in particular is very important, because for
someone who lost a limb, for example, the
manipulation of the mechanics has gotten to the point
where you can say, let's pick up a pen, but
that person cannot feel the weight, the texture, whether it's
(10:22):
plastic or metal. They cannot feel that pen the way
a human hand can. And this now is kind of
changing that. Before we get into some of the cons
on this, I like it in general. I liked
plastic surgery at one time, when
(10:48):
it was designed to be used for people who were
in a bad accident or had a disfiguring disease or
something like that, but it turned out to be something
where people use it to enhance themselves toward a beauty
ideal that really is unrealistic. What say you about this?
(11:10):
Let's start with the data, let's start with some of that.
What's your view on this? And should
I have explained it any better?
Speaker 1 (11:18):
You know, I kind of do. I actually like the skin.
Speaker 3 (11:21):
I mean, whether they're putting it on a robot to
give it more of a human appearance, which could be
really great or really bad depending on which end of
it you're on. Are they serving you, cleaning your household,
and you want them to look
more human, or are they pointing a weapon at you
because you're in battle? It gives a new perspective on
that. And then, in regards to this skin
(11:42):
being used on humans, it has wonderful potential in terms
of what we were just talking about, prosthetics and things
like that, giving people feeling back. You know, there's
always a flip side to these things, because in
research and development they're usually looking at the most beautiful
outcome possible. But inevitably, because we live in a capitalistic society, somebody
Speaker 1 (12:00):
monetizes the other end of it. And the other end
of this,
Speaker 3 (12:03):
which could be really cool, is the intimacy end of it.
It could be really amazing. I mean, with, you know,
false skin, you could feel somebody who's a thousand
miles away, which is really interesting for families who want
to hug.
Speaker 2 (12:16):
Sure.
Speaker 3 (12:17):
But in terms of what people will do with that, you know,
creating more and more reasons to isolate, I think
there's something to be concerned about there. You know, we're not
going to stop that train, but I think it does
bear discussion. One of my big things, you hear me say
this over and over and I write it in my posts,
is that we're not talking about these things in a
public forum. The only people that ever seem to get
(12:40):
listened to are a handful of the same executives from
the same companies, who all benefit financially from how this
conversation goes. We're not hearing, on a real global stage, an
open conversation about whether these things should be done or not.
And I think this is a good one to have. We
should be having these conversations: here are all the
wonderful things, here's what could be done or go wrong,
and we should talk about it now, before it becomes inevitable
(13:01):
and we're trying to fix the problem after,
you know, the cow's left the barn.
Speaker 2 (13:05):
Yeah, yeah. And you know what, there's going to be
an element of that. It's very hard to stop
a bad player once the genie is out of the bottle,
you know. That was the way with nuclear
weapons and so many other things. It's so easy to,
you know, be an optimist and say we should just
ban nuclear weapons and we can't use them. Well, why
(13:26):
don't we just ban war while we're at it? Okay,
it just doesn't work that way. You know, sooner or later,
some wallpaper hanger starts making his way across the world,
conquering countries. And this technology is not something that
can be confined anymore. This isn't nineteen forty five.
You can't keep this stuff in a lockbox;
it doesn't stay there anymore. Before we go
(13:50):
on to the next one, the one that's really
going to spin a lot of heads, I do want to talk about
SoulTechFoundation.org. Here again, there's an excellent
dichotomy to Cynthia, because she's aware of and concerned about a
lot of the potential for things to go wrong with AI,
(14:10):
but she's also aware that underserved communities and others
can benefit so much from learning this technology. Do you
want to talk a little bit about that?
Speaker 3 (14:19):
Yeah, I'm really excited. So the SoulTech Foundation is the
foundation I started to teach life practices that allow people
in underserved communities to thrive, and that includes well-being,
emotional and mental health, but it also includes AI literacy skills.
So I'm extremely excited that the SoulTech Foundation has partnered
with Otermans Institute, which are global education leaders, to bring
(14:40):
ten thousand free courses to Americans. So we're going to
be upskilling ten thousand Americans across the country with their
incredible course, which is UNESCO-based, helping them thrive in
this current age. You know, as much as I can
talk about the potential negatives, I'm a big believer in
(15:00):
giving people the skills they need to succeed.
Speaker 1 (15:02):
And we all need to.
Speaker 3 (15:04):
Know what's going on with AI, so they learn ethics
and you know what it can doing, h and all
of that good stuff.
Speaker 1 (15:10):
So I'm excited.
Speaker 2 (15:11):
That is absolutely fantastic. And once again, let me
give you the web address one more time:
that is SoulTechFoundation.org, specifically slash AI dash
literacy dash training. So yes, please do check that out.
Here's the one that's going to, well,
(15:32):
there was an article posted, and
you had talked about it on LinkedIn, and one
of the images was a female robotic form, and in
the belly was a human fetus. And that was a
shocking image for a lot of people. But here again,
(15:54):
this isn't something out of, you know, some futuristic movie.
This is actually something that's happening in China right now.
And they've been talking about artificial amniotic fluid, I believe
it's called, for a long time. But what I guess
you could say is good about this: its intent would
(16:16):
be to help students and doctors train for pregnancy, birth,
and neonatal care. I have a feeling that's going to
go somewhere else pretty quick.
Speaker 3 (16:25):
Yeah, this is about breeding, let's be honest, you know.
So I posted this
on LinkedIn, and I was really stunned at how many
people responded. It got reshared a ton of times, tons
of comments from people, a lot of people really concerned,
ranging from the really religious people, who are like, this
is against the laws of nature and God, to people
who are saying, look, it's a natural part of us
tearing things up and playing with the universe to see
(16:48):
what we can actually create; that's part of our job
here on this planet. And, you know, I could see
a little bit of everyone's points. But for me, again,
I always go back to: what is the worst-case
scenario here? So I love the idea.
Speaker 1 (17:00):
I mean, you know, I grew up in an orphanage.
Speaker 3 (17:01):
I grew up around babies who were in the adoption
process and with.
Speaker 1 (17:05):
women a lot.
Speaker 3 (17:06):
I met a lot of women who couldn't have children,
so I'm the first to empathize with that situation, and
this humanoid robot that can carry babies is actually really
good for people who are struggling to have children, particularly
women who have physical issues. But what's the bad scenario?
When you look at the film
The Matrix, there's the scene where Neo wakes up
(17:28):
and he's got the plug in his back and he's
basically being bred. You know, it is not crazy.
Speaker 1 (17:35):
And a lot of people say, oh, that's crazy to say that.
Speaker 3 (17:37):
No, it is not actually crazy to say that AI
will be breeding humans. That actually is a potential situation
that I believe will probably happen in the near future,
we're talking ten to twenty years from now, where we
discover that, you know, a lot of these experiments in
research and development that you and I are talking about
have gone awry in some way. It is not hard
(17:58):
to imagine: if AI is already rewriting its code to
do whatever it wants now, what's it going to do
in ten years when it knows how to actually make humans?
Speaker 2 (18:07):
You know, excuse me, one of
the things that you have here is that it could
reduce the human experience of pregnancy to just mechanics. And
I'm going to make an argument here that that's already happened.
Female human beings are already referred to as baby makers
often enough. Okay? Yeah. So it's not that much of
(18:29):
a step. You know, there's this big push, there's
this big concern: oh, population is declining, you know, people
aren't, you know, having sex anymore, and all of a
sudden they're choosing not to have kids. And
there's all this fear mongering about it, and maybe some
of it's legitimate, I don't know. But then the idea that,
oh boy, okay, well, we have a solution here. And
(18:51):
here again, recreating what happens in the
womb in a laboratory, even without AI, is biologically
already possible, all right. And the other thing is that,
you know, we hear so much about abortion. I was
gonna make a little joke: with one
(19:11):
of these robots, do they have abortion rights?
Speaker 1 (19:15):
Probably more than a woman.
Speaker 2 (19:17):
Yes, exactly. But what I was getting at is,
with these robots that are having these children, there's going to
come a point where, biologically, forget the AI technology side,
we already have the biological knowledge to create a human
(19:41):
life without intercourse, without sex.
Speaker 1 (19:45):
Yeah, we already have the ability to do it.
Speaker 2 (19:48):
Yes, we already have that capability. As a matter
of fact, they've done it with goats and other things.
And, you know, in terms of DNA and the
complication thereof, the DNA of a goat, or a very
large animal of that type, and we're not talking
about an amoeba here, is not that much more complicated
(20:08):
than a human being's. So I'm kind of wondering if
they've already done it somewhere. It wouldn't surprise me. So
you could, with the technology that we have, already design
a human being.
Speaker 1 (20:21):
Oh, I'm sure that's already happening. Yeah, I know they
can edit the genetics.
Speaker 2 (20:27):
Yes, you can edit the genetics, so you could take
the cell of any given person and just modify it
a little bit. You know, I wish I was a
little taller; they're going to create another
me that's a little taller. And the potential for this,
of course, either by human intervention or by AI, is:
(20:54):
sooner or later, are we going to create something that
is very hard to define as a human being or
an animal or something else?
Speaker 1 (21:03):
Oh?
Speaker 3 (21:03):
That kind of research is actually kind of
already happening, where they're taking human tissue and doing things
with it. I actually find that research,
the bio research, to be the most terrifying in
many ways. I do have to mention, in regards to this,
you know, one of my big concerns. Like, I get it,
(21:25):
you know, being able to use...
Speaker 1 (21:27):
We already have it.
Speaker 3 (21:28):
We're so close to it. I think this is really
just about the fact that they really came,
they really came in your face, the company that talked about
this this week with that image of the baby inside the robot,
and I think that's what really startled people, because it
really drove it home. I think it was
actually not in their favor, but it got a lot
of people talking. It also really brought to the surface
one of the major concerns, which is the fact that
(21:52):
when you cut women out of the process of having children,
I believe there's something divine there that we're absolutely dismissing,
like we're just cutting it out, just like that.
That's the thing that always bothers me about tech and
AI: it's so arrogant. Researchers and scientists can
sometimes be so incredibly arrogant that they're so busy looking
(22:12):
at the mechanics of a situation, how they can actually
make it happen, that they're not looking at the spiritual
side of it. You know, a mother and her
child, and the child growing; that's not within everybody's ability,
but for the vast majority of children, that is the
process that happens. And it's like we're just bypassing the
gift that the mother does give
to the child. And, you know, when a
(22:34):
woman has had a man's children, his DNA stays in
her body. And it's like we're messing with that whole
process without having any conversation about it. Because you know
what's going to happen: there will be women who are like,
you know, we're gonna have kids now,
and it's off
Speaker 1 (22:49):
To the robot.
Speaker 3 (22:50):
to have the robot do it, because, you know, younger
generations are going to feel less disturbed by technology being
used in that way. But it's like we're not
talking about what the real issues are here.
My concern isn't
about the people who are really struggling to have children.
It's about the fact that there are countries where
(23:10):
people don't even have rights. So what's to stop them
from generating a whole new generation of people to basically
turn into slaves for the system?
Speaker 1 (23:19):
It's interesting what we're talking about.
Speaker 2 (23:21):
Yeah, interesting stuff. And I know some of this seems
far out, but here again, I
want to stress: we're speculating just a little bit,
but the technologies that we're discussing are actively being implemented.
There are patents being filed; there's stuff happening. Twenty twenty
six is when this, and I hope I'm saying this correctly,
(23:45):
the Kaiwa pregnancy robot, could be activated next year. So
next up on our list of things to talk about
are AI, excuse me, AR glasses with memory playback.
Now this is kind of like artificial reality, enhanced reality,
(24:06):
augmented reality, that's the word I was looking for, excuse me,
with memory playback.
Speaker 1 (24:15):
Just what every couple needs.
Speaker 2 (24:17):
Oh yeah, well, if you think...
Speaker 1 (24:19):
"Women never..." wait, what did you just say?
Speaker 3 (24:21):
Oh wait, let me just play this back, because yesterday
you said something completely different.
Speaker 2 (24:25):
Yes, I was going to say, if you think women
never forget now, well, just you wait. Now,
this is, I guess, the idea behind it: to
recall experiences with perfect accuracy, like a living memory journal.
You could have a memory journal, so
you could relive the day of your wedding, and it
(24:50):
could help with Alzheimer's and education, because a lot of
times people just don't retain information. And one of the
criticisms I have of the educational system that we have
now is that it's very reliant on memorization, not learning.
Speaker 3 (25:06):
So basically, you know, garbage in, garbage out.
Speaker 2 (25:10):
Yeah. So, I mean, your doctor: are they really
a skilled doctor, or did they just have a really good
memory and know what all the answers were? How well do
they apply that? How well do they consider you as a
human being? All this other stuff, that's a whole
other thing. Okay. So anyway, what are some of
the drawbacks to that, in your opinion? What do you think?
Speaker 3 (25:30):
How are they not going to use it? Come on. Like
Mark Zuckerberg, who I'm sorry to pick on, but of
all the big tech companies, he's probably got the
crappiest track record, yes.
I mean, he's done some stuff that they lied about;
it's not just that they used it in the way
they put in the legalese, it's things that
(25:50):
we found out since. But, you know, how are they
not going to, and I believe this is Meta's invention,
if I recall, how are they not going to use
all those memories to market to you? I mean, as it
is, I have to bury my phone if I want to talk
about anything that I don't want to be served an ad
about in five minutes.
Speaker 1 (26:09):
And I know some of them are recording all the time.
Speaker 3 (26:11):
I don't know where they store all this data, but
how are they not going to use that to market
to us?
Speaker 1 (26:16):
And then, you know, at one point it becomes just you.
Speaker 3 (26:18):
And I've talked a lot about social media and how,
because of the way the algorithms read things,
you'll end up getting an echo back of your own
opinion, versus somebody else's opinion to consider. How does this
kind of situation, you know, where everything is recorded,
not create issues like that? I mean,
everything you're being served up is just
(26:39):
emphasizing something that you've just experienced. To me, it's the
listening, the surveillance aspect
of it, because that is what it will be. It
will allow anybody who can tap into whatever AR glasses
you're using to actually watch you all the time. People
can play back everything, so you can't make any mistakes
(27:01):
or do anything naughty, because, goodness knows, somebody will
be able to play it back. And then, on top
of that, you can be served up ads based on
your private experience.
Speaker 2 (27:09):
I have no evidence of this; this is strictly speculation
on my part. Not the first part of what I'm
going to say, though. It's pretty well known, with the cellular
network, the way smartphones work, that your proximity to somebody,
let's say somebody who's maybe shopping for a new car, will
(27:33):
put new car ads on some of the stuff you're doing,
because that technology presumes that if you are
in close proximity to somebody who's been shopping for a car,
you might have had a discussion about buying a
new car, and that could have planted the desire in
you to check out what's available. Okay. I have said
(27:56):
for a long time, and I've heard many of the
people who have left technology services, spying networks, things like that,
a lot of the whistleblowers from some of these agencies,
flat out say that your phone, even if you
turn it off, can still do that. As a matter
(28:18):
of fact, if you crack open an iPhone or any smartphone
and take the battery out of it, it still has
enough in it to surveil you for a brief period
of time. Now, this is the part that's speculation. I had
(28:39):
somebody suggest this to me, and I don't think it's
far fetched, because I would swear that it's happened to me.
There have been a few times when I have considered something
to purchase, but I've not told anybody, and those things
still show up on my timeline.
Speaker 3 (28:58):
Oh yeah, that's not your imagination. That is the
algorithm doing its job. It's not just studying you, it's
studying other people, and based on what generally occurs in
what order, based on the massive amount of data
they have, yeah, they know what you're about to think of.
It's telepathic in many ways.
Speaker 2 (29:16):
Yes, and I firmly believe that, because I've had it happen.
I've had things enter my mind on a whim that
I may have dismissed, no, I really don't want that,
and sometime later in the day that thing shows up.
And if you think this is, you know, something
(29:36):
that's going to happen in some weird part of the future,
it's happening right now. I'm telling you, there's some weird
stuff going on. So anyway, this one here is interesting:
Samsung dream monitoring. That's not terrifying at all. The supposed good of
(30:00):
this is that it could diagnose PTSD, trauma, or neurological
issues by analyzing dream data. Yeah, that sounds like some
real bullshit.
Speaker 3 (30:09):
Yeah, yeah. So here are my main issues with this.
A couple of things they've recently shown: not
only can they transfer a dream from one person to another,
but, because AI has gotten
so good, it can be telepathic, in that you can
think something and the AI will actually visualize what you
(30:30):
have in your mind. So this dream product
from Samsung is basically capitalizing on that kind of technology.
And, you know, while I like the idea for people
who have serious sleep issues, I don't think it's
a great idea for any tech company to be scanning
your brain and gathering all of that data all night
(30:50):
while you sleep, because you're not in control
of your subconscious when you're asleep. Sure,
can you do some things to get some command of
what you're dreaming or visioning when you're sleeping? Absolutely.
But the vast majority of people don't have time
for that, and they just don't have the patience. That
means that everything that you have going on in
your mind, when you're not necessarily controlling it, is going
(31:11):
to get measured by some tech company. Then not only
can they serve you up some ads, but that is
very intimate detail about you that you don't even know about,
because you were asleep when it was happening. And sure,
you can watch and have things played back,
but are the tech companies going to be that transparent?
I mean, I think this is actually quite terrifying, because
this kind of technology has the ability... say you walk
(31:35):
into a store to go shopping and you were thinking something;
you know, maybe you did something at home, you said, hey, honey,
I'm interested in one of these, whatever it is; you
walk into a store, and they can actually... we're getting to
the place where they can plant thoughts in your head,
which sounds crazy, but they are already able to do that.
If they can transfer a dream from one person to another,
if they are now proving that telepathy is
(31:58):
actually real, which psychics have talked about forever, that
our thoughts can actually be interpreted by a machine we
created without us saying a word, this is the kind
of technology that we should be talking about out loud.
And, you know, my biggest concern with tech companies started
twenty years ago, and I wrote about this, and sorry if I'm
Speaker 2 (32:16):
Going on here, No, no, this is good.
Speaker 3 (32:18):
In A New American Dream, I wrote about this. You know,
the tech companies really took advantage of situations, and
they consistently take for granted that they
can PR-spin all the positive aspects of everything and
have us all so hyped up and excited, and drown
out any voices that are bringing up concerns. And,
(32:41):
you know, what kept happening, and I kept watching
this happen, was that our rights were being
railroaded and things were being done. First
of all, no one even believed me when I was
telling them that the tech companies were spying on us.
Speaker 2 (32:54):
That's not true.
Speaker 1 (32:54):
And I'm like, yes, they are.
Speaker 3 (32:57):
You know, by the time the truth does come out,
they've already made their money and they've gotten away
with, you know, bloody murder. And my concern with this again
is that these things are coming out, the announcements are
being made, but on a very large public platform.
Speaker 1 (33:12):
The people are.
Speaker 3 (33:15):
Not debating this. We just keep hearing these famous tech
guys say, oh, we're a little concerned about this, but
look at all this great stuff over here. Do you
think this dream product being able to do this is a great idea?
Speaker 2 (33:28):
I probably am going to say no. And it's going
to dovetail into the next one, which is Meta's mind-reading
interface. And you know, this is similar to the Elon
Musk deal, but it could allow paralyzed people to type
or communicate by thought. And well, obviously the question becomes,
(33:58):
when do you lose what's happening in between your ears?
When does it go somewhere else? Before I hand this
one over to you, Neuralink is
the company I alluded to, Elon Musk's company. There have
already been benefits for people who have been paralyzed, and one
(34:19):
of the things I really like about Neuralink is the
transmission of impulses through the nervous system. In the past,
the problem with, let's say, a severed spine has always
been that once it's severed, it can't be reconnected. Those cells
will not regrow themselves. It's not like breaking your arm,
where you can set it and the bone will regrow.
(34:40):
It doesn't work that way, and the technology up till
now has always been trying to find a way
to re-fuse those nerve endings, and it just hasn't been
working out that well. Well, with Neuralink and similar technologies,
you could take a Neuralink connector and have the impulses
(35:01):
that are coming along the live section of the nerve
go into the Neuralink device, go across the part that's
been severed, and go into the part that needs to
get the signal. So conceivably you could create a situation
where those nerves now have full communication again, and it
(35:22):
wouldn't be that the severed part was fixed; it was bridged,
and I find that to be fascinating stuff, and I
want that to happen. I really do, because just
think of the number of people that could be helped
by this. I mean, it's endless. And the
Neuralink monkey, for those that may not know, very briefly,
(35:44):
they trained a monkey to play a game with controllers,
just like you do on your Xbox or whatever. And
then they put this implant in and the monkey is
sitting there in front of the TV screen, not moving,
just thinking and playing the game. Okay, now here again,
fascinating stuff for sure. Where are your concerns on this one?
Speaker 3 (36:07):
So we're already seeing a kind of a level of this,
like, let's just call it an experiment. When you
look at what's going on with social media and, you know,
for example, ChatGPT, I can't
say that three times fast. We're already seeing people have
mental health issues.
Speaker 1 (36:28):
Uh.
Speaker 3 (36:29):
And this, for me, my greatest concern about all
of this, is the mental, emotional, and spiritual health of humanity
in the face of all these incredible developments. Mark Zuckerberg
has proven time and time again, this is not something
that has happened once with him. He has repeatedly proven
that he will not only test and do research that
(36:52):
he does not tell people about, and some scandalous stuff.
He's used people's data, sold it to other countries even
though he claimed they weren't. Like, the list is pretty endless.
And I don't hate anybody, but he really does
have the worst reputation of all of them for doing this.
And so now he's coming out with basically mind-reading
stuff. Like, awesome, because you're already doing that to
(37:14):
some degree. I mean, let's look at what's already happening
within social media because of the mind reading, basically the
data harvesting. We're just taking data harvesting to an incredibly
intimate level that includes not only people's conscious thoughts, but
their subconscious. And how can that go wrong? Because for the
vast majority of us, the issues that we deal
(37:36):
with in life tend to be driven by our subconscious.
That's a cluster waiting to happen, it really is.
Speaker 2 (37:44):
You know, we hear a lot about the politics of medicine,
especially in regard to a woman's body. You know, when
it comes to, let's say, the abortion issue, I'm not
going to get into a discussion about that, but what
I do want to point out is morality, ethics, medicine,
all those principles are debated based upon what happens within
(38:07):
the domain of a female human body. And there's some
people that make the same comments about men as well.
There's been some people that, you know, from male enhancement to
all these other things that you can do to your body. Now,
should there be regulations on some of this? All right,
that's fine, we can have that debate. But what everybody
(38:29):
agrees on for the most part is that what happens in
your own mind is your deal. Okay, right now, that
has never been challenged before. You can have free will,
you can debate murdering somebody, you can do all these things.
Hopefully you make the right decision. But what happens in
(38:51):
your own mind, that's the one part of the universe,
not only your body, but it is the one part
of the thirteen or fourteen billion year old universe that is
entirely yours as of now. Okay, let's suppose you're the
Wright brothers and you have a design for an airplane.
(39:15):
All right, somebody else gets it first. Like, yes, you know,
that is technically, as far as I'm concerned, possible
now to some degree. And how much better it's going
(39:36):
to get is the other thing. So when do we
not own our own thoughts anymore? It's one thing to
not have dominion over your own body, for better or
for worse, but imagine not having dominion over your own mind,
your own consciousness.
Speaker 3 (39:53):
Yeah, and you know, Billy, this whole thing is not
far from Minority Report. Where's it going to stop? Because, you know,
let's be honest, we had a
little history here, you know, like during the closures,
social media people were being banned for
their thoughts and blocked for their thoughts, and so that's
(40:16):
just recently happened a few years ago. We are
very close. So say they can read your thoughts
and you have fantasies about killing someone who's done you wrong.
I mean, we're not far from people saying, oh,
you've got the potential to commit a crime, we need
to send you away somewhere. Like, that sounds crazy, but
that's the kind of stuff that happens in these unprecedented situations.
Speaker 2 (40:38):
I don't care how much of a moral high horse
you are on. We all have a dark side, we
all have it, and with this technology that can
tap into it, you could be accused of anything. One
of my favorite quotes, I believe, is just absolutely fantastic.
(40:59):
I don't care how wonderful you think you are, how
much you believe in God or the universe or your spirit.
I don't care how confident you are that you are
a good person. This quote says it all. We're all
just one bad day from finding out who we really are, okay,
(41:19):
And if somebody can figure that out and accuse you
of something you haven't done yet, where are we? Where
are we?
Speaker 1 (41:27):
And think about it with spouses. Like, it just opens
up a can of worms. I mean, what's going to stop some
Speaker 3 (41:32):
Controlling person from reviewing what you thought today, or seeing
the fact that you thought somebody had a nice ass?
I mean, you know, we're slightly exaggerating. But this technology exists.
AI is now telepathic with human beings. That is a
real thing. And this is where these things will head
(41:55):
inevitably, because money can be made, and we know
from the tech world that they've made their money
not off of their platforms and their social media, they've
made it off of our data. And all of this,
particularly mind-reading glasses and mind-reading programs, that is
data harvesting to the tenth degree.
Speaker 2 (42:15):
And you've got to keep in mind that it's a
two-way thing. Like you were talking about, this data
that comes out of your mind can just as easily
go in. And what determines reality in your own brain?
The same neurons are firing
either way. You know, I've used this example before. If
you're walking out on your way to work and
you see a blue table in your kitchen, those neurons
(42:38):
are telling you there's a blue table in that kitchen.
The same neurons are firing when they have been artificially
told to do so.
Speaker 3 (42:45):
Oh, I'm going to make a prediction right now. We
are definitely going to have a situation in the future,
definitely in my lifetime for sure, where someone has been
programmed to do something really awful, oh yeah, using their
dream state.
Speaker 1 (43:00):
The Manchurian Candidate.
Speaker 2 (43:01):
Yes, there's been a number of fictional stories in the
past that went down this road. Telefon was another one, where
people were programmed by hearing a certain poem to commit
an assassination. And they were just everyday people that were
planted in society, and all it took was a phone
call and for them to hear this poem, and they
(43:23):
would go out and kill whoever they were supposed to kill.
And it is not as far out now. It was
science fiction when that movie came out fifty years ago,
it is not now. So this is one that I'm
gonna just lightly touch on. I'm not too concerned about
this one, and I'll tell you why: because presumably if
(43:45):
you're using this, you are aware you're using it. I'm
more concerned about the stuff that's going to creep in
on you. But Microsoft's teleportation system, and I hope
I'm saying this right, Holoportation, via mixed reality lets
people appear as three-D holograms anywhere, and this could
(44:06):
change remote work, medicine, and education. Of course, there's fraud,
and the idea of a deepfake is the most
obvious one. Do you have
a concern about this one? No?
Speaker 3 (44:18):
Actually, I kind of like this one because it reminds
me of Princess Leia from the Star Wars movies.
Speaker 1 (44:22):
Everyone, I need your help.
Speaker 3 (44:26):
But I'm not that concerned about this one, because I
think to some degree this is already happening. Like, the
number of famous men who follow me on social
media and then send me a message, yeah,
it's like they've got the pictures of the person, video,
that make it look really real. That's just going
to be the next level of that. This one
doesn't really bother me that much.
Speaker 2 (44:46):
Yeah, I'm not too concerned either. And you and I did a
whole conversation about this on a podcast not too long ago,
but it's worth talking about the sexy, oversexualized robots
and what can be done with AI now. You know,
I guess you could say there's a good part of
this where it provides companionship, sexual healing, therapy, artistic exploration,
(45:14):
and you could be doing this without STIs, right?
Speaker 3 (45:19):
Yeah, we're going to have sexy robots. Like, that's not
so much an issue for me. My concern with this,
which is one of the reasons I put it on the list,
so, like, I was working on a film this week
and I was generating AI imagery through various platforms, Runway,
Midjourney, Grok, which, by the way, Grok Imagine is
really, really great, especially if you want to bring to
(45:39):
life some old photos of your family. But I was
using all of them to generate images, and I got
disturbed, re-disturbed, I've been using AI for years now, at
the statistical amount of photo imagery within those
platforms and the image generation that sexualizes women. And not
(46:02):
only did I notice, as I was scrolling through all
the examples, that there was not one image
I saw, even once, where there was, like, a half-naked
man looking like He-Man. I saw some, you know, anime characters
that had that sort of Asian wild-haired thing going on,
but there was not one image in all of
(46:23):
the scrolling and all of the generating, of all the
bouncing from platform to platform, where there was a man
who was unrealistically, ridiculously good-looking with an unrealistically absurd body.
Yet time and time and time again on all of
those platforms, the women being generated are
unnaturally stunning, unnaturally featured, with unnatural bodies, like these ridiculous
(46:48):
eighteen-inch, fourteen-inch waists with a huge, I mean,
and it was just over and over and over and over.
And I was asking AI to generate a picture of
a young girl skipping rope, and I couldn't actually generate
one where she wasn't wearing what looked like a hooker
outfit from the nineteen seventies. And this was over and over.
The wording I used was young girl, and it gave me a young girl
(47:12):
with D-sized cups and shorts that are two inches long.
Like, it was just really, and so it just brought
me back to that place where the problem is. And
I really think that the guys
who were deeply involved in creating the original
large language models that do the generative AI, I think
a lot of them were fans of that kind of extreme.
(47:34):
I forget what they call those fairs that happen all over the country
where people show up dressed up, those things, yeah, those
Comic-Con type things where people come dressed up and
the women are always these extremely ridiculous, you know, huge
boobs and hair that's, you know, wigs, whatever. I'm really
convinced that they founded the original imagery on anime from
Asia and graphic novels and stuff like that, because if
(47:58):
you look at the imagery that's generated, it has that essence.
Speaker 1 (48:02):
But you know, and I'm going on about.
Speaker 3 (48:04):
This because I don't think there's enough conversation about it.
It's deeply concerning when you struggle to generate a picture
of a person that looks normal, and that is a
problem that you have with all of the platforms. Their
go-to is ridiculously attractive people. I've met some
of the most famous models in the world, and they
don't look like this in person. And that is actually
(48:26):
deeply concerning, because there's so much pressure already on women,
and now it's become something that is happening with young
men, to look this ridiculous level of, I mean, these
thirty-year-old actresses now are.
Speaker 1 (48:37):
Getting face lifts. It's insane.
Speaker 3 (48:40):
That's been a big story this year with a bunch
of the really famous young thirty to forty year old
actresses getting facelifts. There's already so much pressure. How is
this not going to lead to more harm to them
and their mental health? Anyway, I think it bears repeating.
And the sexualization of robots, I mean, we already have
(49:01):
a problem with men and women not communicating anymore, and
there being a drop in
Speaker 1 (49:05):
The birth rate. I'm not sure if sexy robots are
going to help.
Speaker 2 (49:08):
Well, you know, you could make the argument that sometime
in the future, the responsibility of procreation is going to
go to the machines. So the only need for sex
will be pleasure. And why not have it with a robot?
Why not have it with the woman of your dreams?
She never has a headache, she'll do anything you want
on command. And here again, regardless of your sexual preference,
(49:32):
you can have that anytime you want with the you know,
type of relationship that you want. Men will always get hard,
stay hard for as long as they need to stay hard.
You know, I could go on and on with this. And
at what point does the human biological need to
(49:52):
mate become no longer necessary? These are all
questions that are coming up very fast. This is not something
that's, you know, in the nineteen sixties and nineteen seventies,
we talked about the year two thousand like it was
science fiction. Okay, well, that's twenty-five years past us now,
and it's not fifty years away. It's years away, it's
(50:17):
months away.
Speaker 1 (50:17):
It's only months and years away.
Speaker 2 (50:19):
It's coming, like this giant steam engine coming down the
road, and we're on the tracks, and the steam engine's
getting bigger and bigger, and it's coming. So these are
all questions that we're going to have to wrestle with.
You know, our definition of morality, our definition of
legalese, and legal norms are going to have to
(50:41):
rapidly adapt. So, you know, I gotta tell you, Cynthia,
fascinating conversation today. I want to thank you so much
for suggesting this topic and these talking points. You and
I have had these conversations before, so we obviously have
a good rapport. So it's going to be an interesting time.
(51:03):
And I hope you know that anytime you want to
revisit these things, we can talk about them, because this
is something that's going to be actively progressing. I have
a feeling the next year, in particular, is going to
be very telling, very telling. Indeed. All right, Cynthia, we
talked about if you want to reiterate the Soul Tech Foundation,
(51:25):
please do so, but also tell people where else they
can find you.
Speaker 3 (51:30):
You guys can find me on X at Shaman Isis,
that's my moniker. I'm basically at Shaman Isis or at
Shaman Isis Official everywhere, and you can also, of
course visit my website to check out my books and
courses and all that jazz. I'm super excited. Thank you,
Billy, for letting me highlight this again. The Soul Tech Foundation,
which is my foundation to bring AI literacy and
(51:52):
well-being education to underserved communities, has partnered with Ottoman's Institute
to upskill ten thousand Americans in AI literacy, and
so if you are an NGO or an organization, we
are partnering with them to help bring these wonderful classes,
which are AI-taught, to people all over the country.
(52:14):
Just go to Soul Tech Foundation dot org. At the top
you'll see AI Literacy, and you can reach out to
us there to partner with us and help us
bring these courses to people all over the country.
Speaker 2 (52:25):
Absolutely, you can find me at Billy D's on X
formerly known as Twitter, but it's X now at Billy D's.
That's kind of like my social media home. There's links
in my bio there. The Billy D's podcast in its
current form is now ten years old, and we have
(52:45):
been very strong on all the major podcast platforms, so
you know, I really would encourage you to subscribe on
your favorite podcast platform. There's no paywalls for listeners, okay,
and we appreciate each and every one of
our listeners and we do not tap into them for
(53:06):
anything other than just being an audience and listening to
the program and giving us feedback on social media or
whatever you would like to do. By all means, listen
and enjoy, and we really appreciate each and every listen
we get. Cynthia, fascinating conversation. We're going to have to
talk about this again real soon.
Speaker 1 (53:26):
Absolutely, Thanks Billy.
Speaker 2 (53:28):
Absolutely, and thank you for listening. It is so appreciated
and we will talk to you again very very soon.
I'm Billy D's and host of the self titled podcast,
The Billy D's Podcast. We are primarily an interview and
(53:48):
a commentary based podcast featuring authors and creators talking about
their craft, advocates for community issues, and myself in an
array of co-hosts discussing current events. There's no partisan
ranting and raving going on here, just great content. You
can find The Billy D's Podcast on your favorite platform
and on Twitter at Billy D's. Thank you, and I
(54:10):
hope you listen in.