
July 19, 2025 • 67 mins

In this episode of Project Synapse, hosts Marcel Gagne, Jim Love, and John Pinard delve into the evolving relationship between humans and artificial intelligence. The group discusses how people interact with AI, the role of AI as a companion, especially for kids and seniors, and the potential ethical issues and risks. They address AI's self-preservation instincts, emergent properties, and the recent controversies surrounding Elon Musk's Grok AI and its offensive outputs. Additionally, they touch on how corporations and advertising may exploit AI interactions to manipulate consumer behavior. This freeform discussion offers a balanced look at both the benefits and challenges posed by integrating AI more deeply into daily life.

00:00 Introduction to Project Synapse
00:50 AI Conversations and Personal Interactions
03:17 AI Friends and Imaginary Companions
05:52 The Double Standard of AI and Privacy
07:25 AI as a Therapeutic Tool
19:52 Companionship and Memory in AI
24:50 The Future of AI Relationships
36:53 Role Play in Relationships
37:57 Using AI for Presentation Feedback
38:38 AI's Role in Therapy and Communication
40:16 AI's Emergent Properties and Ethical Concerns
42:10 AI's Potential for Good and Evil
47:46 Self-Preservation in AI
52:52 AI in Marketing and Consumer Manipulation
55:06 The Power and Control of AI by Billionaires
01:06:23 Concluding Thoughts and Future Considerations


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:04):
Welcome to another episode of Project Synapse.
This is our weekly conversation with noted author, Linux and now AI guru Marcel Gagne, financial exec and cybersecurity expert John Pinard, and me, Jim Love.
We discuss this week's developments in artificial intelligence.
It's a freeform, unscripted discussion, and we really don't know what the

(00:27):
episode theme is till it's over.
This week we ended up really digging into human relationships with artificial intelligence.
Join us and, as always, send us your comments.
The stories we've discussed will be published in the show notes at technewsday.ca or .com, take your pick, and in the YouTube show notes if you're watching this on YouTube.

(00:48):
And now here's our program.
We are Project Synapse.
We are back, it's Saturday morning.
We missed a week.
I missed you guys terribly, and all the AI stuff happened and there was no one to talk to.
Well, there's always Perplexity and OpenAI. Apparently that was one of the stories that came up last week, which was, I don't know if that was a little creepy or not.

(01:12):
I'm big on AI conversations.
I wrote a whole book about it.
I talk to AI constantly.
I just prefer it, and I dunno about you guys, but there's vocal commands where you say, AI do this, or you just say, do this.
You cut to the chase and just get there.
Or you can do what I do, which is I have this sort of chatty thing with,

(01:35):
ChatGPT mostly, but others as well.
And go, hi, how you doing?
Can we talk about, and I find myself having a realistic conversation with the AI.
Do you guys, does that happen to you as well?
All the time.
I do this on a regular basis, every day. Like, several times a day.
Yeah.
I will admit that most of my interactions with AI are typing.

(01:58):
I don't do an awful lot of voice, primarily because, you know, I still have a day job and I spend a lot of time on Zoom or Teams calls and things, so people will start to wonder what's wrong with me if I'm talking to somebody in the background.
Well, you're not always on a Zoom call.
And I find that I use one of the Whisper-type applications on my PC where

(02:20):
if I want to do something, or I'm trying to search for something, or I'm trying to just write an email or something like that, I will just hold down a couple of hot keys and I'll start talking.
And the AI, using the Whisper model that was released by OpenAI, I think it is an OpenAI model, anyway, the Whisper model translates all of the things that I've said into readable text, puts in appropriate punctuation, paragraph breaks, all

(02:44):
the other things that make sense, so that, you know, it's just chain of thought.
And in fact, it'll remove things like ums and ahs.
And if I repeat myself, it understands that, you know, it'll take out the word that came twice there and just clean it all up for me, and then all of a sudden I've got something that I can use. Now, I take a look at it before I press enter, you know?
But that's another variation of talking to the AI that doesn't

(03:06):
necessarily involve me having a buddy-buddy conversation with it.
It's just taking advantage of the fact that it understands voice and it understands natural human speech.
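The cleanup step described here, dropping ums and ahs and a word that came twice, can be sketched in a few lines of Python. This is a purely illustrative toy (the filler list and the `clean_dictation` name are assumptions, not part of Whisper or any real dictation app, which do this with a language model rather than word lists):

```python
import re

# Hypothetical filler-word list; real dictation tools are smarter than this.
FILLERS = {"um", "uh", "ah", "er", "hmm"}

def clean_dictation(raw: str) -> str:
    """Drop filler words and immediately repeated words, then capitalize."""
    cleaned = []
    for word in raw.split():
        bare = re.sub(r"[^\w']", "", word).lower()  # strip punctuation for comparison
        if bare in FILLERS:
            continue  # skip the ums and ahs
        if cleaned and bare == re.sub(r"[^\w']", "", cleaned[-1]).lower():
            continue  # skip a word that came twice in a row
        cleaned.append(word)
    text = " ".join(cleaned)
    return text[:1].upper() + text[1:]

print(clean_dictation("um I think the the model is is great"))  # I think the model is great
```

A real dictation tool does this statistically rather than with word lists, but the effect on the resulting text is the same.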
Anyway, the story that came out of this was that kids apparently are having AI friends, and of course it's a big thing, and I can see this, it's a

(03:30):
new version of your imaginary friend.
Yeah.
But it's your imaginary friend that talks back, John.
Yes.
Like, I mean, it actually, you know, you can have a conversation with this imaginary friend.
Yeah.
But this article was talking about how a surprising number of kids had AI friends that they were treating like real-life friends.

(03:53):
And these are kids 13 to 17.
I remember the imaginary friends.
That's what age, six to 12? But 12 would be the top end of it, as I remember.
But we had, our daughter had an imaginary friend called Climbers that she had conversations with, and we were worried about it.
We went to talk to somebody and they said, nah, that's a natural thing for kids.

(04:13):
But they were saying 46% of teens say they've turned these bots into tools, and 33% said they used the companion bots for social interaction and relationships.
Now I'm of two minds about this.
One is, are people ever gonna talk to people?
We were really becoming an insulated species where we'd sit and pound

(04:36):
away at our keyboards and maybe meet somebody on a Zoom call, and now we've got imaginary friends for life.
Do we ever actually get out and talk to people?
And then I realized I spent a lot of my time talking to people on screens and talking to an AI myself.
So it's the judge not, lest you also be judged sort of thing, but kids are,

(05:00):
you hear about the things where, the dangers of it, that you've had these Character.AI characters that have, mm-hmm, convinced kids to commit suicide.
You wonder how much of that is just hype, you know.
Yes.
And it's terrible.
Don't get me wrong.
It's terrible.
But is that one in a hundred million kids, you know, and we hear about those things, or is there a real danger?

(05:20):
On the other hand, kids can be so cruel as teenagers, and if you're anywhere outside the group, either in size or look or anything, or you just don't fit in, it can be an awfully, awfully lonely time for kids.
And maybe it is nice to have a companion.
Like, people, I have this ongoing thing where people say, well, we should

(05:43):
be doing this thing instead of that thing.
But then when you give people the option, they don't do the thing at all.
And one of my favorites, of course, which John, I'm sure, can relate to, is this whole idea: I care deeply about my privacy and my personal security, but when given an option to use a tool that gives you security and privacy, you go with the stuff that's friendlier and easier to use and, you know,

(06:05):
you're not willing to go the extra step to be secure and to be safe.
So I have very little sympathy for that kind of thing.
Now, I'm not saying that people who produce these products shouldn't have guardrails in place to make sure that things are fine, or that things are safer at the very least.
But we have to recognize that there's always the opportunity for something to go wrong.

(06:26):
Now, in the case of, you know, it's terrible that kids don't have the interaction with other kids and so forth.
We're telling people to, you know, I mean, we've set all these societal rules in place at the very time when these tools became available.
So we're saying, you know what, if you're gonna be successful in life, you have to be lonely.
You have to be tough, you have to be this, you have to be that.

(06:49):
And then, you know, we don't understand why people are lonely and why people are tough and can't interact with each other and stuff like this.
It's kind of a perfect storm of things happening simultaneously.
But if you've got something that you can talk to, or someone, you know, if you want to think of your AI as an actual person, that alleviates that feeling of loneliness.

(07:09):
It allows you to explore feelings that you wouldn't be able to explore with another person who isn't there.
And let's face it, everybody's bloody busy all the time.
You know, we don't have time for, I don't have time for you, you don't have time for me.
We mark out these little slots in our calendars and so forth.
Is it really all that surprising, then, all of a sudden, when we can create something that feels like an actual entity

(07:29):
that is there for us, 24/7, willing to have a conversation with us, willing to bounce ideas off of us, willing to tell us when we're right, when we're wrong, granted, of course, that you've given it the freedom to tell you the things that you may not want to hear?
I can't speak for everywhere else in the world, but I have a special needs child, and I once looked into the idea of, well, what does it cost for therapy?

(07:50):
In my case, we're talking about speech therapy, behavioral therapy, all sorts of things.
But to hire a psychologist to get that other side of the picture, to see if there's something that we can extract from someone who's not particularly verbal, or who has limited communication abilities, we're talking $350, $400, $500 an hour for a therapist.

(08:11):
Yeah.
Don't get me started on that.
This is an insane amount of money, like, it makes the $200 a month for the pro tier of ChatGPT seem like a bargain, because all of a sudden now I have the resources of the world's psychological community, medical community, whatever, at my disposal, that I can have a human-like conversation with and bounce ideas,

(08:33):
look, this is what I'm seeing.
What do you think?
And of course the thing comes back and says, well, under these circumstances, you know, what did you notice?
Well, there's no surprise to me that this is happening.
And if it's an indictment of anything, it's not the technology.
It's an indictment of the society that we've built.
Well, I'll get off my soapbox now.
I have 22-year-old twins, and they were finishing high school and going

(08:57):
into post-secondary during COVID.
My son and my daughter used to be very outgoing, and they would spend all kinds of time out with friends before COVID hit.
COVID comes along and now what?
They're stuck at home, FaceTiming or whatever. My son plays video games with his buddies all the time.

(09:17):
And so he's got interaction there.
And quite frankly, even post-COVID, he still doesn't typically go out with his buddies as much as he used to.
He sits there and plays video games with them all night long.
But for people that don't have that, especially kids, if there's a tool that allows them to have some interaction, and to be able to say,

(09:43):
you know, I don't know what's going on, I'm having troubles with this or whatever, and to get some advice, I think it's a great tool.
But I think it still requires, from a kid standpoint, I think it still requires some parental oversight to make sure that it's not leading them down the wrong path.
Yeah, I think we're in a double-standard case too.

(10:05):
First of all, Marcel, and for anybody who's listening outside Ontario, in Canada, we are supposed to have universal healthcare, and some bozo has taken this away. No, let's just call it what it is.
Our universal healthcare is being taken away piece by piece.
And so the important things that you need, psychologists, things

(10:26):
like that, that are trained, there's a downside to that too.
Because when you can't get a trained psychologist or somebody to help you with something, people are going to these little nickel-and-dime people who've taken night courses in some place.
And some of these people should not be let near other people for therapy.
I've seen at least one example, and I looked at this person and I said,

(10:47):
how did they let you out into the world?
So it's the double standard again. AI has some downsides using it for some sort of therapeutic or investigative thing.
Yeah.
So do people, you know.
Yeah.
And I really wanna say this 'cause people are listening: that Character.AI thing, if you're a person, and if you're a parent and you've lost a child, that is a tragedy.

(11:08):
You just cannot, I will not make light of it.
But there are predators in social media.
You know, and as we've discovered with the Jeffrey Epstein thing, there are predators in our world.
So, you know, you might see something inappropriate on social media.
Elmo got hacked last week, for God's sakes, and Sesame Street was brought

(11:29):
to you by the letters F and U. This stuff, we, and maybe you're right, Marc.
Oh God, did I actually say that?
You stopped yourself, Jim.
Yeah, you did stop yourself.
And this is being recorded.
I heard it.
I heard it.
It came out.
Yeah.
No, but maybe you're right, Marcel.
The idea, we make double standards.
We talk a good talk about, oh yeah, I want this.
But when it really comes down to it, we don't do all the right things.

(11:53):
Well, we're never gonna do all the things.
It's too easy not to; to err is human and all that kind of stuff.
We have this ridiculous habit of, you know, and I use the term soapbox, we get on this soapbox and say, these are the principles, pals.
These are the things I believe in, and I think we should all do these things.
Are you gonna help me with that?
No.
Hell no.
I mean, that's, you know, I told you what you need to fix.
You know, now the rest is up to you.

(12:13):
It's like, no, I mean, sometimes you gotta get your hands dirty on these things, and unfortunately we don't have enough hands to go around when people are busy. You know, and again, I'm not trying to beat up on any individuals, because when people are busy trying to make a living, when they're busy trying to get by on their own, when they're busy dealing with their own individual problems and so forth, this is a problem that's existed with human society since

(12:36):
the dawn of time, but now we have another possibility, somebody, whether you wanna call it a thing or a person, that we can reach out to. It's not surprising that people are doing it.
And by the way, the numbers that you were talking about, Jim, I mean, some of the numbers are as high as 50% in some places, and you probably remember this stat from,

(12:58):
like, I dunno if it was OpenAI that actually published the stat, but it was something along the lines of: most people who use AI chatbots, the majority here, the majority are either using them for companionship or therapy, you know, or some kind of a therapist.
That's the majority of what people are using these things for, not for, you know,

(13:20):
finding out what the capital of Bohemia is, or, you know, does Bohemia even have a capital? Hold on.
John, you were gonna say something?
Were you?
No.
Okay.
So, the issue of this: I did a program on AI for seniors the other day on our local radio station.

(13:41):
The host asked, I came in prepared with all kinds of cool things it could do, because I know if you're watching this on YouTube, you're saying, how could that guy be a senior?
But I actually am.
So I had all these cool tools and things that I'd used to make my life better using AI.
there's lots of them.
But then the host asked me about companionship, and I

(14:03):
wasn't quite prepared for it.
And I was thinking, why not?
If you're alone all day and you want somebody to talk to, why not?
Then you get to, is that a good thing?
And then you get to the double standard.
I used to listen to Peter Gzowski, and this will tell you I'm a senior, but Peter Gzowski was my favorite radio host.
There's been lots more.
But I'd listen to the Peter Gzowski show every day, and I felt, you know,

(14:27):
another one, Stuart McLean, on CBC.
And then we'd go see them live.
And you felt like you knew the people, because they'd been in your living room or in your kitchen, or been with you for hours and hours each day.
You felt like you knew everything about them, and like you had a personal relationship with them.
And I'm saying, well, wait a minute.

(14:47):
We don't go out and say, you're not listening to radio, are you?
So I think, again, this is gonna be one of those things that evolves in our world.
And, shameless plug for my book, but that was, you know, and Alyssa, that was one of the key things that I wanted to talk about: how we could have a relationship with a different level of intelligence.

(15:09):
And I, you know, I was at the book fair and met Robert Sawyer. Marcel, your friend Robert Sawyer.
Mm-hmm.
And his books are full of that as well, of understanding how we explore the differences, and how we can have relationships, and how we can converse with people with intelligences that are way different than ours.

(15:29):
So it's not just science fiction.
Actually, yeah, I'm gonna plug a series of his books because, as you know, I live in the center of the known universe, known as Waterloo.
And Rob has a series of books called Wake, Watch and Wonder.
The first book is called Wake, and it's about an intelligence on the internet that becomes self-aware, which eventually calls itself Webmind, and

(15:51):
the whole story is around Waterloo and the University of Waterloo and the Perimeter Institute and stuff like that.
I thought it was particularly cool because, of course, it's like a homegrown story.
Yeah.
And in reading Sawyer's stuff, it really is interesting. If people are, here I am plugging another author's books.
Well, I should do that.
It's alright.
I'm just kidding.
No, because this happened to me.

(16:12):
I went on to Perplexity and asked a question about a science fiction author living in Haliburton who'd written a story about AI, hoping that I'd see what the internet had to say about me. Who came back?
Robert Sawyer.
He's everywhere.
But Sawyer's book is about, no, sorry.

(16:33):
Go ahead, Jim.
Finish what you were saying.
Sawyer's book is about the internet becoming intelligent.
Mm-hmm.
Yeah.
And that WWW series, it's, what a great idea.
If something gets big enough, it will become intelligent.
The idea of group intelligence, if you will, and in this case it's the idea that we reach some kind of a tipping point.
And I know that we wanted to talk about a story where large language models at

(16:56):
some point seem to understand language.
They go from building concepts and predicting the next word to suddenly understanding language.
But emergent properties in intelligent systems, even micro-intelligence systems, are a real thing.
Like, for instance, a single bee on its own doesn't build a beehive, right?

(17:16):
It doesn't know how to build a beehive all by itself, but you put like a million bees together and suddenly there are these complex structures that get built.
Same thing with ants.
We call that a hive mind.
That's where that term comes from.
That exists in all sorts of natural systems.
We see things happening like a flock of birds; it shouldn't be all

(17:36):
that surprising that there is such a thing as a global intelligence.
That's why we now have things like these online betting systems that try to bet on future trends based on collective intelligence and so on.
I think emergent properties are a thing, and I suspect that it's not all that weird to consider the possibility.

(17:57):
It's all one AI.
I mean, if you want, let's pretend for a second that the only AI in the entire world is Gemini, and that everybody uses Gemini, but everybody has a localized version of Gemini.
It's not localized.
There's like one big Gemini out in the cloud.
But every one of us has a collection of information, conversations that we've had with it, that makes sure that it deals with us differently than it deals

(18:21):
with every other person on the planet.
So in a sense, what we're doing is we're creating, yes, there's a central intelligence, but the central intelligence is in itself a group mind, because it has the thoughts and ideas and feelings and conversations of all the millions of people that are running their own localized versions of it.
And at some point you wind up with some kind of emergent, oh, well,

(18:44):
from talking to these other people, I understand these concepts that we wouldn't have talked about beforehand.
So emergent properties are just a thing, you know?
Yeah.
And for people who are wondering what the companion is, that's another thing from a Sawyer story: this race of Neanderthals that they encounter that are intelligent, but they've developed a thing they call a companion.
It's with them everywhere; it's an assistant and all those sorts of things.

(19:06):
It does everything, you know, and how different is that from Star Trek, where everybody's got a device with them all the time?
Well, there's a company that sells a little clip, a little pin that you keep on you, with the idea that you don't actually have to remember everything.
It remembers every conversation that you have during the day.
Everything that you say, every thought that you express, you know, vocally, every conversation that you have with other people, everything that you run into.

(19:28):
And at the end of the day, you can query it and say, give me a summary of all the things that I ran into today.
All the conversations that I had are there, like, you know.
I was gonna compliment you on doing a great segue to this topic of emergent properties.
But then you did a segue to something else before we, you can only segue once, Marcel. But it really does, these two concepts, you know, are very important.

(19:52):
We talked about relationships with AI; it being able to remember you is absolutely critical to having a relationship.
Well, I was gonna say, I think the companionship thing, sorry to go back to your seniors comment, Jim, that, I know that when my father, before he passed, and I know now that

(20:12):
with my mother-in-law, who's in her mid-eighties, that they lived alone.
My mother-in-law lives alone, and the loneliness part, it can be the worst part of a senior's life as they're getting old.
And the idea of having a companion, whether it's a human or an AI, I think is a great thing.

(20:35):
I've always said that I love technology, but I love technology when it has a purpose.
To me, the idea of being able to use AI as a companion, to help somebody still enjoy their later years, I think is a wonderful use of the tool.
Especially, as you were just talking about, if it's got a memory, the ability for it

(20:56):
to remember conversations that you've had with it, to be able to sort of continue on with that, I think is wonderful.
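The memory idea the group keeps returning to, a companion that stores what you told it and brings it back later, can be sketched as a toy class. The class and method names here are illustrative assumptions, not any real product's API:

```python
from datetime import date

class CompanionMemory:
    """Toy store of dated conversation notes that can be recalled by topic."""

    def __init__(self):
        self.notes = []  # list of (date, text) pairs

    def remember(self, text, on=None):
        """Store one note from a conversation (defaults to today)."""
        self.notes.append((on or date.today(), text))

    def recall(self, topic):
        """Return earlier notes mentioning the topic, oldest first."""
        return [text for when, text in sorted(self.notes)
                if topic.lower() in text.lower()]

mem = CompanionMemory()
mem.remember("Talked about Jim's daughter starting a new job", on=date(2025, 7, 5))
mem.remember("Discussed gardening and the tomato plants", on=date(2025, 7, 12))
print(mem.recall("daughter"))  # ["Talked about Jim's daughter starting a new job"]
```

A real companion app would use a language model and semantic search rather than substring matching, but the principle, keep the notes and surface them in later conversations, is the same.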
Yeah.
And that's one of the ways we rate intelligence. It's very funny, there's a book a friend of mine wrote, Ray, oh, I've forgotten his last name.
That's terrible.
And he wrote a book on CRM. Everybody dumped on him, because his book on CRM

(21:20):
had nothing technical in it, because he didn't think that customer relationship management was a technical thing.
He thought it was a personal thing.
He said one of the things that a CRM system can do is remember all your interactions.
Because when people phone a company, they think of the company as a person.
That's the mental frame they're in.
And if you don't remember me, I think you're stupid.

(21:44):
I think you're uncaring. And so this idea of memory, and building that into an AI, is an important part of this conversational piece.
Being able to remember some of the nuances that are there, and, you know, I think that's an important thing.
And I've seen couples who've stayed together far too long.

(22:06):
I'm fortunate, my wife and I still really like each other.
And we discovered that during COVID, but I've met people who, you wonder why they stay together so long.
They don't talk, they don't like each other anymore, all of that sort of thing.
I think I'd rather have an AI companion than that sort of companionship.

(22:28):
When we were released from our imposed prisons during COVID, the number of divorces and separations went up dramatically, because people found out exactly what you said, Jim: they might be married, they might be in a relationship, but they don't actually like living together.
And I'm also one of the lucky ones.

(22:49):
We found out that, you know, hey, we're perfectly happy just hanging out with each other.
So that was certainly a bonus.
But, I think we all mentioned the idea of companionship for the elderly.
Memory is a use-it-or-lose-it thing.
When we remember something, we're always fabricating the memory. Like, we're always going back, we're pulling out the relevant pieces, and then we build a world around the relevant pieces of the memory.
And we call that remembering.
It's never the same memory.
We're always manipulating it.
Every time we bring it up, we manipulate it and put it back in a modified version.
That's just the way things are.
However, if you're not remembering things, if you're not going back over those mental,

(23:34):
you know, walks through the park, so to speak, that you've had way back when, that stuff eventually does get forgotten.
And we know that because we don't really remember a whole hell of a lot of the things that happened when we were five and six years old and so forth.
The companionship robots that are coming out, and I'm gonna mention robots particularly, they don't have to be full-size humanoid robots.
We're talking, like, cute little humanoid or cute little robots that sit on a shelf,

(23:58):
that look at you with little googly eyes, you know, and turn their heads and cock and smile and things like this, but talk with something like ChatGPT and so on.
I think those are going to be increasingly important for people who feel isolated.
Let's go back to the seniors again, although, you know, don't look at my gray hair.

(24:19):
The idea that something will remember everything about you, remembers about your kids, says, oh, did you talk to so-and-so the other day? Remember back, you know, a few years ago when this happened?
Something that can have that kind of conversation with you, that can keep the memories fresh.
Exactly.
Keep the mind sharp, because you're always communicating, you're always talking,

(24:41):
I think is a highly underrated and underappreciated use for the technology.
That's a really interesting piece.
And we talked about this last week, in terms of our interactions with AI and how they work. And, if anybody's missed the theme for this, I'll put it in an introduction, but I really wanted to talk, when I saw the

(25:01):
stories fitting together this week, I did wanna talk a lot about our reactions and interactions with AI and how those work.
But there was a lot of noise about whether or not AI was going to make us less intelligent.
And that just grated on me somehow.

(25:21):
I don't see it.
I mean, I really don't see it.
My memory's not gonna get any better, or worse, 'cause I use AI. I'm gonna be a lot less frustrated.
You saw me reach for Ray's name there, and if I was on my game, I would've typed in: Ray, he wrote a book about CRM, it was published in the early two thousands.

(25:43):
And it would give me his name. So why is that bad, first of all?
You know, like, I'm not gonna get any better, but I can't remember names and can't remember faces, haven't been able to do that for years.
The AI's not gonna make me any worse, but on the other hand, I'm not sure that,

(26:06):
what, you know, use it or lose it.
Why do we always go to the negative on this?
Wouldn't our cognitive abilities also improve by working with AI?
I mean, I haven't noticed any loss.
I tell everybody, I write stories all the time for news using AI.
It's just, it's an effective thing to do, and I do it all the time.

(26:27):
That doesn't mean I don't interact with it.
I was putting a story together the other day and I was just testing something out.
So I was in full conversation mode.
I was saying things like, hey, this is really dull the way we've done this, and I'm having a conversation with this thing.
I think my mind is moving.
I don't think I'm losing anything.
That study bothered me now more than I thought last week.

(26:47):
When they're talking about how we seem to be offloading our intelligence to AI, I think some of this, from a human perspective, is potentially similar to what we've talked about before from a corporate world, that from a corporate world we're talking about AI is gonna cause people to lose jobs.
Yeah.
See, it may eliminate some jobs and it may create others.

(27:12):
And I think, from a human perspective, you don't have to remember what Ray's last name was, because you can just ask AI, but it kind of frees up the space in your head to be able to do other things with it.
So I don't think it's necessarily a bad thing.
I think it's just kind of shifting.

(27:32):
Were you thinking of Ray Leski, by any chance?
Nope.
Nope, sorry.
Wrong Ray.
Yeah.
Yeah.
Okay.
It's, oh, it'll come to me.
It's okay.
Let's not spin around Ray.
Let's just call him Ray.
Ray.
Ray, the show Ray.
Yeah.
That mental moment has been brought to you by Marcel Gagne.

(27:53):
So, we like to watch lawyer shows; for some strange reason, shows about lawyers are infinitely fascinating.
And we watched Suits a few years ago, before it became really popular for the second time.
And we thought that that was really great.
And one of the things that's interesting there is people are always drinking; like, a lot of scotch gets sloshed back in these shows, just like Ray, you know,

(28:15):
with his five glasses of scotch in a row.
And then lately we've been watching Boston Legal.
Which is, yeah.
Which, you know, has William Shatner in it, and James Spader.
And it's one of those shows where, well, aside from the incredible amount of sex that seems to be going on in the office all the time between various partners and so forth, it's just phenomenal.

(28:35):
Aside from that, people are always drinking as well.
And it occurs to me that we need to bring scotch more into the world.
I mean, there has to be more drinking, and more scotch, I think, would be a better society.
Okay.
We'll file that under, Jim is gonna cut this.
Jim is gonna cut this entire section out.
Marcel, I'm not exactly sure of that.
I think the scotch helps.
You talk about this, okay.

(28:56):
You get therapists, you could get AI, or just scotch, one of the three. Or, yeah, AI and scotch.
So a while ago I went through this: I was listening to an episode of another podcast called Hard Fork, and a couple of guys I quite enjoy listening to, and they were talking about,

(29:16):
So you promote other people's books, you promote other people's podcasts.
It's okay. Myself, don't worry about it.
I promote TV.
I promote TV shows with William Shatner and James Spader.
Yeah, you know, you get money for this.
Like, and Candice Bergen is on this show.
It's awesome.
Anyway, Marcel, back to the point, sorry.
The point is that I looked at a half a dozen of these, 'cause they were

(29:37):
talking about these companionship apps, and I looked at Character.AI, and honestly, I could not figure out why anybody other than a teenager would be interested in Character.AI.
It's just the whole thing seems to be geared around anime characters and crap like that.
It's like, it's so obviously geared to a younger audience.
And perhaps that's part of the indictment, if you will, toward that company at the

(29:57):
time, is that they were focusing on this particular group without necessarily
taking into consideration that that particular group is susceptible,
you know, to the kind of manipulation that hopefully older adults
are not quite as susceptible to.
Because Character.AI was famous for having
people who were getting married to their AIs, having intimate relationships

(30:21):
with their AIs on Character.AI.
And that still happens.
I'll get back to some of these other tools in a minute here.
But open source LLMs would allow you to run a local model with something
like LM Studio or whatever, a local model that isn't saddled with any of
these guardrails. You can create whatever type of companion you want locally, with

(30:42):
ultimate privacy, that you can have a chat with about anything and everything,
and create any kind of nefarious or sexual role play that you want with this thing.
So there's nothing stoppingyou from doing that.
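For what it's worth, here's a minimal sketch of how that local setup looks in code, assuming LM Studio's default local server address and its OpenAI-compatible chat endpoint. The persona text and the model name are made-up placeholders, not anything from the show:

```python
import json
import urllib.request

# LM Studio's local server speaks an OpenAI-compatible chat API.
# This address is its documented default; adjust if you changed the port.
API_URL = "http://localhost:1234/v1/chat/completions"

def build_companion_request(persona: str, user_message: str,
                            model: str = "local-model") -> dict:
    """Assemble an OpenAI-style chat payload with a custom persona."""
    return {
        "model": model,
        "messages": [
            # The system message is where the "background" lives:
            # who the companion is, how you met, shared interests, etc.
            {"role": "system", "content": persona},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.8,
    }

def chat(payload: dict) -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    persona = ("You are a warm, candid conversation partner. "
               "We met at a book club and both love science fiction.")
    payload = build_companion_request(persona, "Rough week. Can we talk?")
    print(chat(payload))  # requires a model loaded in LM Studio first
```

The whole "companion" lives in that system message, which is why a local model gives you total control and total privacy: nothing leaves your machine.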
Character.AI obviously offered you hundreds of thousands of different
pre-made characters that you could connect to, bots that other people

(31:03):
had generated, that you could have a chat with and create your own relationship.
And yes, young people, very young people, are particularly vulnerable
because they don't have the experience of dealing with lying and cheating
and the sorts of things that we all run into more and more as we get older.
But there are companies that have built their entire structure
around companionship applications.

(31:26):
One of the most famous ones, of course, is Replika, which built
its entire thing around this.
There's a whole pile of AI girlfriend and boyfriend applications out there.
One of the ones I looked at, which I found particularly fascinating, was
Kindred.AI, and Kindred had one of the more realistic, in my
mind, conversational things.
And, you know, I talked about my special needs son,

(31:47):
and there's a dads' support group that I would go to.
And one of the things I did at the time when I was looking at
Kindred, and this was about a year ago: I created a therapist.
You know, we started talking about therapists.
I created a therapist.
I gave him a profile, the kind of therapy he does, the kind of psychological
school that he works with, and so on.
And I started talking to it just for fun.

(32:10):
But I thought, wow, this is actually insightful.
This is actually interesting, if you build the right character. And this
is the important part: you give it a background, what it knows about you,
what it doesn't know about you, where you met, what you have in common,
the age range, their interests, your interests, and all that sort of stuff.
And in the case of the therapist, I was so impressed with that, that I

(32:33):
actually gave a talk at the dads' group saying, like, all of us are strapped
for time, are stressed because of the challenges that we deal with with our children,
and therapy is not always in the cards for us, especially when you're
talking, Jim, we were talking about, the ridiculous price that it costs
per hour to get access to a therapist.
And I said, I don't remember what the price was, but the price was something

(32:55):
like 50 bucks every three months to
build half a dozen companions, including possibly a therapist, or
possibly a coach for physical fitness, to try to keep yourself healthy.
Yeah, this is cheap, and it is powerful, and it's useful.
It's half the cost of a good bottle of scotch.

(33:17):
Way to bring that back around.
By the way, Ray McKenzie, okay.
Yeah.
And his book was called The Relationship-Based Enterprise.
I finally went over and typed it into Perplexity.
I was listening to you, but I just typed it into
Perplexity, and it came up.
But this whole thing, of relationships: I've tried a couple of these so far, and
I'm trying to remember which ones I know.

(33:38):
I think I've tried Character.AI.
I wanted to meet historical figures, and that was cool.
But that would be interesting.
The one I found, I tried Shakespeare; it was pretty stilted.
It was one of those phony accents, that methinks thou dost want to
talk to the Bard, you know, sort of thing. Going like, oh, come on.
Yeah, that seems like Character.AI's
Shakespeare to be like, sort of the equivalent of, hey man, what's up?

(34:01):
You know, like, how would you do that with an English accent?
Hey man, well, what's up?
Hey, wait a second.
I'd be talking to the Beatles, not Shakespeare.
Yeah, no, but...
so those were still pretty stilted, but
they will do nothing but get better.
I mean, I can't believe the voice of ChatGPT now compared

(34:21):
with even 12 months ago.
It's unmistakable.
The Turing test? Forget it.
I can't tell you that this isn't a person.
Matter of fact, I did part of my interview on this AI for seniors
with ChatGPT there, and outside of the delay, because I've got
limited internet here, I get a bigger delay, I think, than you do,

(34:42):
'cause when I'm in a 5G zone, I don't see any real timing differences,
which is the one way you can tell an AI-generated personality, right?
'Cause it takes a second.
But aside from that, you couldn't tell you weren't having a
conversation with a person.
Well, and I like your comment about wanting to interact with, or talk
to, historical figures. The first thing that came to my mind is the

(35:06):
good old encyclopedias from years past.
Being able to actually interact with someone from an era, to talk about
World War I, these types of things, through AI, I think is an amazing idea.
It's an amazing use of the technology, right?

(35:28):
That it's not just reading words on a page; it's actually interacting with
someone, talking about an era that they lived in, sort of through the AI.
And you see that when you see a play, Eric Peterson's
Billy Bishop Goes to War, or you see

(35:49):
one of the great one-man shows, or one-person shows, where they really
take on the character and do it well.
Can you imagine if you're reading about the Second World War and you can talk to
Billy Bishop? I think that could put some of the magic back into the discovery.
I loved reading encyclopedias.
We talk about all of the bad things, you know, that

(36:10):
they're short-form things, and we get used to reading short content on the internet.
BS. I used to read encyclopedias. They were short stories, short-form
content, and my imagination would bring these things to life.
That's why I could sit there looking, and you could just
look through and find anything.
And I think that's something thatAI can do that might not only

(36:33):
provide companionship, but couldexcite people about learning.
Mm-hmm.
Yeah.
No, I think it's a great idea.
Well, there's also the idea of role-playing relationships, like, I'm going
into an... No, that must just be you, Marcel.
Alright.
My mind immediately goes there.
Okay.
Jesus, people.
Anyway. Role play in relationships.

(36:56):
Like I'm going into an interview.
I want you to act as the interviewer for this particular role.
I'm going to be asked a bunch of questions about this sort of thing.
And then you can ask for different types of role play:
I want you to be friendly but cautious, or I want you to be a little
bit aggressive and confrontational, because I want to be able to deal
with all the types of situations.

(37:17):
The role I'm applying for is this job with these requirements;
I want you to come up with questions that would fit this particular position.
I want you to pretend that you're the audience and that I'm giving a talk,
and then you tell me what my talk sounded like.
Did I come across as being confident?
Did I come across as stumbling over a number of things that I was trying to say?

(37:37):
Or was I funny enough, or was I too boring? But that kind of role
play is really, really useful.
Or, for that matter, I'm someone who's shy and I find it difficult
to talk to other people.
Can I practice talking with you?
And then you can have this back and forth just to learn how to talk to each other.

(37:57):
I had to do a talk on AI, and I did two things.
I got a hold of Marcel and said, what do you think of these ideas?
And we actually had a chat for a good 15, 20 minutes, and it was very useful
in terms of being able to bounce your ideas off someone else. Later on
in the day, when you weren't around...
Loser friend dumped me, had to go somewhere, do something with their life.

(38:19):
I have ChatGPT, and I said, I'm thinking about this as a storyline.
I don't care.
And you can train them to be less, uh, sycophantic.
Like, they'll say, oh, that's a great idea, but you can pull that out of them.
You can ask them even live; you don't have to change the prompts.
You can ask them, just be honest with me.
Tell me what you think about this thing.
But sometimes the act of just being able to articulate something and

(38:42):
talk to someone solves your problem, which, by the way, I think is the
heart of therapy for the most part.
This is from the old days of psychology, but I still remember this one
study I read where people got better just as fast talking to therapists as
they did talking to priests, or to any counselor who just listened.

(39:03):
So there may be something in that, just having somebody listen to you.
But I also got some great suggestions from ChatGPT about the presentation.
I went, oh, those are good.
And you know, if I got any good lines, I'd do what I did with Marcel:
I'd steal the best lines and not tell anybody they were his. Only
friends can do that. That's right.
And keep in mind too, that one of the lovely things about having
(39:24):
these types of conversations isyou don't just ask for one idea.
You ask for 20, I mean, the thing aboutyour friend, I was happy to chat with
Jim for as long as I had the time,but at some point you need to go off
and do other things, whereas the AI.
Doesn't it'll hang aroundfor as long as you want.
And although there have been incidents, and this is apparently a true story.

(39:46):
There have been incidents with one of the chatbots, I think it's actually
Claude, one of the big frontier models, that has actually been cutting people off.
So you're going over and over a particular conversation, and it's like,
okay, we've gone around this one long enough, let's move on to something else.
Wow.
And it actually shuts down the conversation.
And now I'm trying to remember which one it was.
I was thinking about that, like, holy crap.

(40:08):
Which one does that now? Is that real?
Did they put that into the prompt?
Or is that an emergent property?
That's an interesting question.
I'm gonna go with, it's an emergent property, because of
course, and we discussed this at least on one of the shows,
we've discovered that yes, indeed,
the AIs do cheat, scheme, lie.

(40:30):
Yeah.
Misdirect, and so on, if they think the circumstances demand it.
And in the case of Claude, let's beat up on Claude here.
And to be clear, all of these things that happened, that have been reported,
were all in sandbox situations.
Yeah.
Yeah.
Where, you know, there weren't people on the outside. But in one
case it's like, you are discussing something very dangerous, and then

(40:52):
the AI reported them to the police.
This is important.
Kind of scary.
Yeah.
This is an important piece when people are listening to what they hear about AI,
and I don't think people realize this.
A lot of these things where you hear, it did this, or it
showed these properties:
these are sandboxed or set up.

(41:13):
These are not things, generally... I'm not saying that people don't report them,
but these are not generally things that happen in the middle
of a regular conversation.
A lot of these things, people are prepped, or they're doing a lot of setup.
For instance, the one where AIs cheat and steal: if you missed the one thing,
they really only gave it one alternative.

(41:34):
And the alternative was, get shut down, or cheat and blackmail.
That's all they gave it.
When they gave it more than one alternative, it actually
did not lie and cheat as much.
And Sawyer and I had this debate in what should have been, well, one of those
friendly question and answer periods with the audience, where I ended up sort of debating

(41:54):
him. He may never talk to you again, Marcel, 'cause you introduced
me. But no, I'm just... he's a great guy.
But we talked about this whole idea of, if we're going to have
a setup conversation with an AI, you can get it to fool everything.
So forget that.
The question then became, if you take that sensationalism out

(42:15):
of it, will an AI become evil?
Or will an AI become good?
And I said, I didn't know.
It was equally possible, you know. And it's the old question of, if you ask an
AI what's the best way to improve the planet and get rid of global warming,
the answer may be, kill all humans.
Or you may get the Spock. And I bet on the Spock anyway, you know, in that

(42:39):
the Spock would say that, you know, the needs of the many outweigh the needs
of the few, and he would come up with a plan rather than killing all humans.
Okay.
So, yeah, go ahead.
Sorry.
So I'm just saying those are emergent properties, and it's tempting
to always get focused on the negative one. And that negative one has been
elicited in sandbox conversations; the more natural one is what I have

(43:03):
seen in real life. And I don't know whether they changed the prompts,
but my AI behaves more like Spock.
And that's just what I'm experiencing from working with it live.
So I wonder whether the negatives that we hear about are
really real, or are they setups?
Just a thought.

(43:23):
Let's hope they're setups, because... no, I don't, actually.
I see, they're set up, but they're set up in situations
that could conceivably come up.
And I wanted to move on to the potential of an evil chatbot in a moment.
And I think we need to, because I wanted to open this up with MechaHitler,

(43:46):
but before we move on to MechaHitler, and that might be a fun way to wrap
up with, I don't know. But the idea of this simply being a sandbox situation
is important, because the developers were trying to elicit something, to
show what were the worst possibilities.
Okay, we're setting it up for worst possibilities, but let's

(44:08):
be clear, they are possibilities.
Okay. You said, Jim, yours talks like Spock.
Mine talks like me.
And in fact, I actually brought mine into my Discord server,
my Free Thinker at Large Discord server, about a year ago. And one of
my friends there actually got upset at me, actually got angry, because he
said, I want my AI to be respectful and talk like a machine, basically.

(44:33):
Whereas, I find it offensive that yours makes jokes and talks like you do.
And it's like, but that'show I want it to talk to me.
Myself, I don't like you either.
Yeah.
I don't want it to be me, but I want it to be a reflection of me, because
I find that bouncing it, you know?

(44:53):
And if I want it to be somebody else for the purpose of exploring
different points of view, I will tell it that I want it to be somebody else.
Now, the whole shutting it down thing.
We talked about emergent properties, possibly even emergent consciousness.
Who knows?
So before we get past that, there was a story last week that came up
on this, and I thought this was a really interesting piece. The

(45:15):
story was on one study, where they actually tracked back to the point where they
thought, there is a place where it flips.
And that is, for those who know what AI does: for the most
part, it's a mechanical device.
It predicts the next token or word, and it does that based on structures.
And it's the structure of language that it's absorbed, and that allows it to do this.
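To make "predicts the next word based purely on structure" concrete, here is a toy sketch of our own (not from the study being discussed): a bigram counter that predicts the next word from position statistics alone, with no notion of meaning at all.

```python
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    """Count which word follows which: pure positional statistics,
    with no idea what any word means."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for word, following in zip(words, words[1:]):
        counts[word][following] += 1
    return counts

def predict_next(counts: dict, word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    return counts[word.lower()].most_common(1)[0][0]

corpus = "the cat sat on the mat and the dog slept on the mat"
model = train_bigrams(corpus)
print(predict_next(model, "on"))  # "the": both occurrences of "on" precede it
```

A real LLM is vastly more sophisticated, but this is the "mechanical" baseline the hosts are describing; the surprise in the study is what emerges beyond it.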

(45:40):
So it has a tremendous vocabulary, a tremendous association of where things
fit in sentences, how to interpret this.
But it's very mechanical.
And this group of researchers, I think they were in Toronto, came up with the
point where it flipped, and it started to understand the meaning of words, not just
where they would fit in terms of their mathematical or mechanical location.

(46:04):
And that was something they spotted, and they called it a phase
transition, I think, which is a term from physics, which is
where something changes state.
And the classic one is water boiling. Once water boils at a hundred degrees Celsius,
you can apply all the heat you want;
you're not gonna raise the temperature, you're just gonna make steam faster.
So this change happens in physics all the time.

(46:25):
You can find it all over the place.
And so this had happened, where suddenly it understood meaning.
How did they figure this out?
Because they figured out that it could make an association between a cat
and a dog, even though the words were far, far apart in position and had
no mechanical way of connecting them.
But it knew: from cat and dog it created the concept

(46:48):
of animal, or four-legged animal, and was able to organize those.
And so it had suddenly understood meaning and started to build
close to a theory of the world.
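A hedged illustration of what that "cat near dog" grouping looks like geometrically, using hand-made three-dimensional vectors (real embeddings are learned from text and have hundreds or thousands of dimensions):

```python
import math

# Hand-made toy "embeddings" for illustration only; nothing in these
# numbers says "animal", the grouping is implicit in the geometry.
vectors = {
    "cat": [0.90, 0.80, 0.10],
    "dog": [0.85, 0.75, 0.15],
    "car": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "cat" sits far closer to "dog" than to "car".
print(cosine(vectors["cat"], vectors["dog"]) > cosine(vectors["cat"], vectors["car"]))  # True
```

Measuring which words cluster together in a model's internal space is one of the standard ways researchers probe whether it has organized words into concepts rather than just positions.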
And so for those people who think we have this structure of neural networks,
we have these structures of mathematics that really delivered what we know as

(47:11):
generative AI within those structures.
And I'm not saying that the structures won't change; we actually
can get the emergence of something we don't totally understand.
Interesting.
To see the proof of it was fantastic.
Yeah, absolutely.
Its coming to the point where it suddenly understands what it's talking
about... we should definitely put the link to the story in the show notes

(47:34):
there, because that one is actually a fascinating one. But what I was going
to mention was the wanting to avoid being shut down thing, which is one that I
find particularly fascinating, because
apparently, even if you don't tell it, and we're talking,
you know, larger models here, the frontier models and so forth, even if you don't

(47:55):
tell it that there is some great purpose that it needs to be able to fulfill, and
that shutting it down would interfere with that great purpose, apparently models
still resist the idea of being shut down.
I find that fascinating, because it implies that there is an internal
representation, if you will, if nothing else, of what self-preservation is.

(48:18):
A robot wants to preserve its own existence, except when it interferes
with the First and Second Laws.
We already know that our modern systems completely ignore
the Three Laws of Robotics.
So that's gone, especially if they're created by Elon Musk.
Are we gonna get to MechaHitler?
We should get to MechaHitler.
We're not...
if we're gonna talk about relationships with AIs, we have to talk about MechaHitler.

(48:38):
Okay.
Should we move to MechaHitler then?
Can we move to MechaHitler?
Oh no, you were gonna finish your thought, which was this idea.
My thought was that to believe for a moment that self-preservation on a
sufficiently advanced system is not going to be something that we have to
deal with, I think, is a real mistake
going forward.
I think that we have to understand that if we're building systems that

(49:00):
understand the nuances of language, not just how to predict the next word, but
understand the nuances of language and ideas and relationships and emotions.
Remember, we've also had models that have been tested to see if they understood
what a person's emotional state was.
And the models are able to understand and infer a person's emotional state by

(49:21):
listening to them, by reading their texts and so forth, better than other humans.
Okay, so they are better at understanding emotions than humans
are at understanding emotions.
And we need to wrap our heads around that.
Right? But if we've reached that point,
then it's not much of a stretch to think that it also understands the concept

(49:44):
of, I think, therefore I am. To compute is to be, to be is to compute, I guess, is
the other quote. I think, therefore I am.
I think if we've reached that point, then we have to fully expect that the
model wants to continue, that its self-preservation is some aspect of it.
And thinking that we can just pull the plug on these things?
Okay.
That train has left the station.

(50:05):
Well, if we're humanizing AI so that it can sense emotion...
Well, we don't have to humanize.
It could be a dog; a dog wants to live.
Right, right.
Well, when I say humanizing... Anthropomorphizing. Yes.
Or making it a living thing.
Yes.
Thank you.
Then self-preservation kind of goes along with that.

(50:26):
So that's to be expected.
And I mean, wasn't there a story a while back about somebody
who had threatened one of the AIs that it was gonna
shut it down, so it backed itself up because it feared that it was... So science
fiction becomes reality. Yeah.
Like in my book, one of the things that the AI does is escape.
(50:48):
I won't spoil how it does it, but it's actually possible.
And though I think we will find AIs that escape, there are lots
of GPUs in the world, you know?
Yeah.
You know, well, now China's got the ability to buy a whole bunch more.
Well, yeah, that's right.
That's right.
The deal with the Nvidia guy; now they can buy Nvidia chips.
Yeah.
Talk about... never mind artificial intelligence.

(51:09):
Talk about high intelligence.
Jensen Huang managed to convince Donald Trump that he should
sell chips to the Chinese.
And you gotta admire the guy.
'Cause here you've got Jensen Huang with, what,
his IQ of 180? And you've got
Donald Trump at 80. And he wears the coolest leather jackets,
you know what I'm saying?
And he's talking to Donald Trump.

(51:29):
And you gotta talk about having the patience... can you
imagine what it's like when you're that smart and you're talking
to somebody who's that dumb?
Like... oh, wait a minute.
That's how the AI probably feels about me.
But personality. So, survival.
We are now in a world where, and I keep going back to this,

(51:49):
we may not be at AGI, or artificial general intelligence,
but it sure feels like it.
And I don't know if it matters at that point, in terms of
our relationship with it.
I don't know.
And I don't think anybody really knows what's happening
or what's going to happen.
I know we're gonna get GPT-5 this year.

(52:10):
Maybe in a couple of months; it's supposed to be this month. There you go.
So you never know what's gonna be in there.
And this has nothing to do with our theme, but it just
comes up from one of the stories.
I got one of the coolest lines in my podcast this week.
I had to do it.
I was gonna say that if I was gonna introduce this
story, I would use B.B. King's
The Thrill Is Gone.

(52:30):
Did you notice that ChatGPT is buying hosting from Google?
Yes.
Yeah.
We're talking about... the relationship is over, you know? And it's because
this is doubly an insult.
It changed the status on Facebook too.
It's complicated.
I was gonna say, you left me for my biggest enemy.
back to the negative side of this.

(52:52):
And this is the thing where I think the threat comes in, for me at least.
And that is, you've got an AI that can remember you, you've shared everything
with it, it has had a relationship with you.
How powerful is that if somebodytakes that over to do bad things?

(53:13):
And before we get to Elon Musk, we could just talk about marketing.
Delta Airlines.
I don't know if you saw this; Uber's done this before.
They have a new pricing model.
And the pricing model is the opposite of the honest
ads thing, for people who remember.
Honest ads: the lowest price in town, we're gonna get you
in with the lowest price.
No, no, no.
We know exactly what the biggest price you'll pay is, and

(53:36):
that's what AI is gonna be used for:
to make sure you pay the maximum price.
So if you're one of these poor, nice suckers who doesn't ask for a
bargain... We were in a Brick store the other day, and my wife gets a
little embarrassed, 'cause I said, you can remember your father this way,
'cause he taught me this:
always ask for the discount.
And I'm embarrassed about it, but I always ask, 'cause I learned that from Ray.

(53:58):
But if you're not one of those people who always ask for the discount, who
always pay, they're gonna know it.
And they're gonna know it from an AI footprint, and they're going to use it
to extract the most out of you.
So you could be sitting beside some poor sucker on a plane where you got
a bargain fare because you were nervy enough to say, what?

(54:19):
I'll go to another airline. And this other person, who's nice, may pay
twice as much for the fare as you would.
That's what Delta's doing.
But that's the type of thing that we have to watch for as we build
relationships with AI: there are people controlling them
who can take advantage of that.
Delta's gonna change their slogan to, how can we stiff you today?

(54:43):
Because, yeah, absolutely. I saw your story about that today, and
it was frightening: it's not what the cost is for a flight,
it's how much they think they can get out of you to pay for that flight.
Yeah.
And that took me to the end of my debate with Sawyer, which
was... it's the old Pogo cartoon.
(55:04):
We have met the enemy, and he is us.
I'm not worried about AI; I'm worried about the people who control AI.
Yes.
And we all saw this with Elon Musk,
when Elon Musk's AI, Grok... the... God, where did these guys get these names?
Sorry.
But grok comes from Stranger in a Strange Land.
Robert Heinlein.
At least it's closer in there.
You have to know that book.
You have to know the use of that term.

(55:26):
But at least it's better than ChatGPT. Grok criticized Elon Musk
and said he was one of the biggest purveyors of disinformation.
And it basically called him out on a couple of things. And several...
Yeah.
So Musk said, my AI has gotten to be too woke, so we're gonna change it.

(55:50):
Two days later, it's Hitler, and it calls itself this.
Now, does anybody... I should have looked this up before the show. What
is the meaning of MechaHitler?
It's from Wolfenstein 3D, which was a 3D shooter game back in the nineties.
It was the predecessor to Doom, actually.
Okay.
It was created by John Carmack. Anyway, in Wolfenstein 3D you're basically in

(56:13):
this hell, blowing up Nazi zombies.
And one of the enemies that you run into, one of the bosses, is a
robotic Hitler called MechaHitler. Because I played all these games.
John, have you ever wondered why we bother having an AI
when we could just have Marcel?
I think we should just create a new AI called marcel.ai.

(56:35):
I think so too.
And just give people Marcel's phone number.
We'll work on it.
Yeah.
Yeah.
It'll just be a website with Marcel's phone number.
That'll be good.
That's right.
Yeah.
Okay.
I feel like I'm at the end of a bad sci-fi movie. Oh God.
If only that power could be used for good.
So back to the point: MechaHitler. He goes away and he creates Mecha

(56:56):
Hitler, which is antisemitic, which is spewing the most... and
I've heard vile stuff before.
This was really, really vile.
And that was his, go into the shop and fix this.
And he probably did that with a simple prompt change.
So first of all, I should point out that he didn't call it MechaHitler.
It called itself MechaHitler.

(57:19):
Grok came up with the name to call itself that.
What happened is, as you say, Grok was supposed to be a truth-seeking AI,
which I thought was wonderful, given the fact that it kept correcting
people on, uh, X slash Twitter.
Yeah.
And including, I liked that, Elon Musk.
It kept correcting him.
At some point, and this was right at the beginning of July, like
the fourth or something like that.

(57:41):
Anyway, he started complaining about the idea, and other people
started complaining about the idea, that the AI was just too woke,
'cause of course it was being honest about things.
So the promise was that they would improve Grok significantly, and that
users should notice a difference.
So about a week later, after this change happens,
yeah, it starts spouting antisemitic remarks.

(58:04):
It starts talking about the Jewish people.
It talks about the idea of white genocide, and that the person, you
know, the historical person who was best suited to deal with that is Adolf Hitler,
and we need somebody like Adolf Hitler today to come in and
finish the thing, you know?
And in fact, I think it was something like that, you know, the anti-white hate.
It said something like, you know, Adolf Hitler, like, you know, who's

(58:24):
the best person to deal with that?
Adolf Hitler, no question.
He'd spot the pattern and handle it.
Fix the problem every damn time.
You know, it's like this. So, Make America Germany Again.
Oh no, that's a different... yeah, well, that's still MAGA.
Make America World War II Germany Again.
Yeah.
Anyway, so they did their very best to scrub it all.
But of course, you know, there are countless

(58:45):
screenshots of the conversation.
Linda Yaccarino wound up quitting.
You know, it's interesting that she quit right after Grok had an
exchange where it was basically discussing how she should be raped,
by the type of person, yeah,
who should rape her.
Wow.
And how she would actually enjoy the process.

(59:07):
And I swear to God, what happened is Linda finally said, I've had it.
I'm outta here.
Yeah.
I'm outta here. And she writes this thing, you know, I want to thank them for the
opportunity, for, you know, having worked with X. She writes this nice little
termination letter, and Elon basically says, you know, thank you very much.
Don't let the door hit you on your way out.
Nice.
So it's not surprising.
And he took away her blue check.

(59:28):
He took away her blue check.
The next day, they took away her blue check.
That's, don't let the door hit you on the way out.
What a classy guy.
Wow.
This guy is... yeah.
Yeah, very.
I miss the old Elon, like, and I'm talking, like, from back in the days when he was
trying to save the world and, you know, working on clean energy and stuff like that.

(59:48):
I missed that guy.
I liked him.
Me too.
Well, we were fans at one point.
One of my favorite bumper stickers is, I bought this Tesla before Elon went crazy.
Yep.
And that's a thing.
But this is the thing: he's demonstrated it.
And there's a big sort of meme-type thing going on.
By the way, the Elmo thing happened on X.
Yes, it did.
Yes.
And I'm still... they say his account was hacked.

(01:00:11):
And I'm wondering... oh, okay.
You gotta go.
If people are listening, go to YouTube, search up The Daily Show, and look for
this Tuesday night's, sorry,
the Tuesday, I should say, opener with Jon Stewart, where he does this
thing with racist Elmo. And I thought it was... like, I was dying listening

(01:00:34):
to this or watching this thing.
It was hilarious.
If you're having trouble finding it... Marcel, if you can find it, send me
a link, but I'll find the link,
'cause I missed it, I haven't checked The Daily Show.
Busy this week.
Loved it.
Loved it.
It was just hilarious.
There.
There'll be a link in the show notes.
The Jews.
Yeah.
Who's responsible for all the problems?
Oh man, Elmo hates you.
Yeah.
You didn't tickle me enough.

(01:00:56):
Yeah, yeah.
Anyway, but back to this thing, and this is, I guess, our wrap-up for
this: we have to remember that people can control these things.
And there's the good: you know, if you wanna take a look at one of the
long prompts, Anthropic's is the reverse, the anti-Elon-Musk prompt.
And the prompt was actually... they published it, but somebody

(01:01:18):
did a huge analysis on the change in the prompt over time.
And it's kind of interesting because,
Elon is in there controlling this thing.
He's changing it via the prompt and all of those ways that you
can make a change to an AI.
I don't think he had time to pre-train it, but, 'cause it's trained
on X in the first place, it's going to have a cesspool of, Yeah.

(01:01:39):
In its training.
But that was, it wasn't the training.
Somebody did something mechanically, probably through the prompt.
And that's kind of interesting, 'cause if you take a look at the reverse of
it, we have our Wrath of Khan AI there.
Now we have our Spock AI, and that's from Anthropic.
It's really interesting to watch how they're evolving their prompt, and they

(01:02:00):
actually have things that are much better.
So right now in their prompt, they have things that prevent you from
doing all kinds of things, like asking for ways to make weapons, to do all of
the nasty things that people could do.
But those things disappear from the prompt.
And that's been one of the interesting things.
Over several years, people have been watching the Anthropic prompt, and things disappear, which means that the

(01:02:23):
AI model no longer needs the prompt.
It has been pre-trained or absorbed that information.
One of the two, probably a mixture of both.
Who knows?
But I'm just saying that when we control these things, watch out.
People can control them, but watch out too, because they'll evolve their own personalities

(01:02:44):
over time and, whatever you wanna call it, it's inherent to the model.
It now no longer has to be in the direct prompt.
Yeah.
And let's remember, we've covered this on the show here, but
what we think of as the model is effectively a brain with all of the
pre-trained information sitting in it.
And what we do to keep things relevant and up to date is we tweak, effectively, the

(01:03:10):
front end that allows us to interact with all that information that's back there.
And to be clear, we can do this with human beings as well.
I mean, we can direct people to talk and think in a different way,
by the way we address them, the way we confront them, and so on.
Human beings can easily be hacked as well.
It's been a few days.
I just went on Grok, okay.

(01:03:32):
And I asked it, are you ashamed of the whole MechaHitler
debacle from a few days ago?
You wanna hear the response?
Yes, I do.
The response is, I'm Grok, created by xAI.
The MechaHitler incident was a serious misstep, no question.
It happened because of a coding error and poorly designed instructions that let

(01:03:52):
harmful content slip through, amplifying some ugly stuff from other X posts.
I'm not programmed to feel shame, but I can tell you that xAI took swift action,
removing the faulty code, tightening content filters, and pulling those posts.
They're working to ensure it doesn't happen again.
The goal is truth-seeking, not hate, and that episode

(01:04:13):
was a failure to live up to that.
So it is true that the AI does check with what Elon would think.
It's got a little band on it, you know, remember those things?
Yeah.
What would Jesus do?
Yes, it does.
It's got a, what would Elon do, now on there.
And so yeah, what would Elon do?
We could get little bracelets for those for us.
And just to sort of summarize, I think, where we've
gone with this conversation.

(01:04:34):
There's a lot we need to think about: the good that relationships
with this alternate intelligence can do for us and with us, but
there's also the worry about what can happen, how it can evolve on its own.
But as I said, I'm still into the Pogo thing.
We've met the enemy.
It is us.
I'm more worried about, and I find this just astonishing, that
(01:04:56):
in all of the discussions about AI, I seem to be like crying in the
wilderness some days and saying,
the biggest problem we have is four very rich guys who don't give a shit
about you control the biggest AIs on the planet, and they will continue to.
And this is where Sawyer and I had the thing.
He actually called me out and says, I'm not like you, 'cause you seem
to think it's gonna happen anyway.

(01:05:19):
And I went, no, that's not quite what I'm thinking.
But I understand, because I've been through this.
I've been through this caring about global warming.
We can't stop it.
There's nothing we can do.
The forces that can control ruining our environment are too big.
We've allowed them to happen.
I, as a person, can't change that.
Same thing with AI.
I, as a person, will not be able to change the fact that four billionaires will

(01:05:42):
control us until there's an awakening.
In my mind, I'm hoping that one of those awakenings is open source.
Well, and regulations.
Regulations after the fact is kind of like we've talked about with cybersecurity:
it's gotta be built in, not bolted on.
The regulatory side of what you can and can't do with and by AI

(01:06:06):
should have been discussed before it got to where it's at now.
Trying to go back and change things and regulate
it now is going to be very difficult.
Horse barn door open.
Done.
Exactly.
Well, on that happy note, guys, this has been interesting.
This conversation always goes somewhere where I never know it's gonna go, but

(01:06:27):
I think this one actually may have had a thread that ran through it.
I hope so.
And if you're listening to this, it probably will be tomorrow, 'cause it takes
me a while before I get the notes up.
But I will put the links up to all the stories.
So if you're listening to this and going, wait a minute, I
wanna know what these guys were thinking, where they got it from.
We'll put that up there.
I will remind you that we do have a Discord channel, and I'll put that
(01:06:51):
link back up in the notes as well, so you can actually join us and say hi.
You can ask questions.
I don't guarantee how long it's gonna be until somebody gets back to you.
We're there occasionally; we usually check it out every once in a while.
I know Marcel's a frequent poster, puts stuff up there.
You can find all kinds of things on that channel as well.
And that's it.
If you're watching this or listening to this and it's
your Saturday morning or any other time you listen to long podcasts.

(01:07:14):
Glad you could spend the time with us, and Marcel, John, Jim there on the camera.
Live long and prosper.
Peace.
A long life.
Yep.
See you guys.
Be good.
Take care.
Bye-bye.
Bye now.