
March 5, 2025 81 mins

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
[clip] My belief, my personal understanding is it's already in place.

(00:16):
The resistance is already there, and that's making sure of the checks and balances.
You're not going to have a nefarious actor just go out and build this AI model.
It's going to be watched and regulated, cared for.
It's going to be done in public, in public view.
So there's not someone building a, well, I don't know, the atomic bomb was built in secret.

(00:40):
The Manhattan Project was completely secret and quiet.
They couldn't talk to anybody.
No one talked about it.
They finally got the technology out.
But the talk now about AI isn't that there are some backroom people potentially doing
that.
[introduction] So, you have a minute?

(01:00):
Yeah, I do right now.
Now let me, well, introduce this conversation by saying, if anyone's listening, and for those
of you who are listening, this is just a conversation between my daughter and myself about
things which may be important and may not be important, but whatever it is, we're having

(01:21):
fun talking about it.
And we invite you to join the conversation and hopefully you'll have as good a time as
we do.
And thank you for joining.
Do you remember where we started?
Why we started a podcast?
We started it just to be part of the great conversation.
Yeah.
I think.

(01:42):
But what was the catalyst?
The catalyst.
It was because we had these conversations all the time anyway, and we might as well make
a recording of it.
Yeah.
And we were like, you know what?
Well first we wanted to see if we could get other people talking to us too.
The problem was I was always coming to you with my life problems, and then we'd talk

(02:05):
for two hours about it.
And then we end up thinking, well, it's too bad no one else could hear any of that.
Yeah.
It didn't help anything else.
It wasn't part of the world or anyone else's life.
Anyone else's.
We could have our conversation with someone else.
But if we record it and put it in here as something that we discussed in depth, then

(02:29):
anyone can join that conversation and then tell us their five minute blurb at the end.
Yeah.
Yep.
Okay.
Well, good.
[main conversation] Our topic today, by the way, is that type of thing, the singularity of this conversation.
Okay.
So we're talking about singularity and what it means.

(02:50):
We talked about dualism a few weeks ago, and that brought up the point that everything is
dualistic, everything has one side and an opposing side.
Is there anything singular?
So that's where the question started thinking, what is singular?
And I've heard the word singularity, the term singularity, but what does it actually mean?

(03:11):
So if we discount the idea that everything has a not-everything, I mean, there's me and then
there's not-me, then I am a single person.
We covered that idea.
Yeah.
We just have to remove that idea because everything has...
We have to see if singularity is a valid word because if everything does have a not everything,

(03:37):
then there can't be a singularity.
So how can you validly use that term?
So that's what I'm trying to puzzle with in it.
I've found some ideas on it.
Okay.
So the first thing, the generic general way you use it, you say, this has been a singular
conversation.
So it's something to be singular.

(03:58):
One of the definitions is it's peculiar, it stands out, it stands up against anything
else.
It still has a dual.
There's a worse conversation it could have been, but this is our singular best conversation
we've ever had.
That's interesting that that's a definition of it.
I mean, it stands out, it stands above, but I don't know if...

(04:20):
What is it?
Actually, particular privilege, disconnection.
So something that's privileged or particular about this.
Just had it.
Like the singular opportunity, I happen to be going...
I'll mention this in this thing.
I probably won't go up and see Jordan Peterson, but I could.

(04:42):
I'm going to be at a venue that he's at tonight, and I could have a singular opportunity to
shake his hand, and I may take it.
Who knows if he's still doing that?
Or the singular opportunity to ask a question in front of the microphone.
So I may take that as well.
I haven't found the right question, but maybe it'll gel while I'm there.

(05:05):
Right.
Are you going to take notes on everything he says, or are you just going to take notes
on what you feel he's actually saying to you?
Speaking to me.
Yeah.
It's going to be what he's telling me.
Yeah, what he's telling you.
Yeah.
Everyone's going to have their own notes.

(05:25):
Since we discussed that type of idea, that concept, chances are even that broadcast will be on
YouTube at some point.
I mean, he records everything he does.
He does.
And you'll be able to find that event somewhere.
So I'm not worried about getting all of his words, but getting the feeling of the place
and what I'm hearing as what we've been talking about.

(05:48):
And we've got, what?
This is episode, you had it listed here, 27.
Yeah.
27 two-hour conversations.
And with that, we've gone through a lot of stuff.
Right.
And there's still more.
We haven't exhausted all of my life problems for sure.
You're not resolved.

(06:09):
You're not, well, and we'll talk about it.
You're not at a singularity yet.
I'm not.
So that's one of the issues of singularity.
But this one, the first one, it's peculiar and separate above or set apart from everything
else.
And that would be a singular opportunity to meet in person a celebrity.

(06:30):
Right.
So that is an opportunity that it stands out for sure.
Have you met a celebrity before?
Who have I met?
Rex Allen.
I've talked with him a few times, but he was famous in the eighties, seventies.
Yeah.
Anyone recently?

(06:50):
No.
I can't remember.
I feel like there is somebody famous that I met, but I think maybe the most famous people
I've met are just big in the bagpiping world.
Like I've had the opportunity to go out for drinks with a couple of top pipers from the

(07:14):
Midwest area.
And that was fun.
Like just hanging out with them with a few other people.
Like they know me.
With some legends, the legends of the arena.
Yeah.
So that's probably the closest that I can remember.
I've been in the same room as Joe Rogan, but it was a really, really big room with like

(07:37):
20,000 seats.
Yeah.
That's like as close as I got to Jay Leno.
I was in a room with him before.
Did you see Jay Leno?
That's cool.
Yeah.
Oh, and who was it?
Robin Williams.
I went to a Robin Williams concert when I was in college.
A concert or show?
A standup comedy show, which was, it was fun, but I was just in the crowd.

(08:00):
Yeah.
So I didn't meet him.
I didn't meet Jay Leno.
I haven't met any of those people.
You didn't meet Joe Rogan either.
I didn't meet Joe Rogan, no.
So one of these days you'll be on his podcast and you'll talk for a few hours.
Yeah.
There's some local musicians that are very popular in this area that I have met and spoken

(08:20):
with and sat with and shook hands for just like a minute or two, but they're not famous
enough.
So would you call that singular still?
I mean, were they big enough that you'd say this is a singular opportunity or do a lot
of people have that opportunity?
If I make the world small, then it's a singular opportunity.

(08:45):
In my world, in my world, it was a singular opportunity.
Okay.
And you appreciated it.
It is something that you can appreciate.
So maybe something that sits higher in your memory because it's singular.
Yeah.
So based on importance singular as an important status to an individual, that's a valid use

(09:05):
of the word.
It was a singular event, was a singular opportunity.
Okay.
But there was also not that opportunity.
The times when I wasn't sitting at a table with someone famous.
I think the only reason you couldn't use that is if something else occurred that was more
singular.
That was more.
Okay.

(09:26):
So it's like the apex of a pyramid.
This is my singular story of a famous person.
Whatever that scene is right there.
Yeah.
You get up to the top of the pyramid.
This is my singular point of the pyramid.
But at some point, you're going to meet someone else and say, okay, that's just normal now.

(09:47):
My singular point is when I met Michael Jackson.
Maybe someday I'll meet Michael Jackson.
Yeah.
That'll be difficult to do now, but it's not impossible.
We talked about that.
Spirit guides and stuff in the past.
Yeah.
He's sitting in purgatory.
Spirit heaven is somewhere.

(10:09):
He's sitting somewhere waiting to meet all of his fans, right?
Yeah.
Yeah.
He's on his throne there and welcoming people as they die.
As they die?
I don't know.
Okay, a singular event then is something that you single out.
You can make an event singular based on your experience.
Your perspective has singular events that happen.

(10:32):
Okay.
Maybe a singular meal that you've had.
I have one of those.
I had a singular meal event right after I was married.
My wife and I went to Charles.
It was an expensive restaurant.
We probably paid $50 for our dinner.
It was called Charles?
In the 80s.
Yeah, the name of it was Charles.

(10:55):
In the town we were born in.
It was just a restaurant, but it was one of the fanciest there.
We drove up in the valley, took our car, and we walked in and had a $50 meal.
It's interesting.
I mentioned that it probably only cost $50 because now we go to dinner at a fast food
restaurant and we can spend $50.
Right.
It's the 80s dollar.

(11:15):
Any restaurant is going to be at least $50.
What was the 80s dollar worth?
It could have been...
Whatever.
Maybe it's a $150 meal now.
Yeah.
$200 meal for today.
It was good.
What did I have that I felt was a singular dish?
That was the crab.
Crab cakes, I think is what they called it.

(11:37):
It was a crab dish inside the shell of the crab.
It's kind of cool.
Oh, interesting.
They served it in a little crab with the food, but I know it was expensive and I enjoyed
that.
I really don't even know what it looked like or what it was made out of, but it was crab,
I remember.
Not imitation crab for sure because it was...

(11:59):
They had it in the shell.
Yeah.
It was real.
It was expensive.
It was good.
We enjoyed it.
It didn't change our life one bit whatsoever, but it's a singular event that I remember.
Because I remember it, I hold it in that class.
I do hold another one.
I had lobster in Duluth, Minnesota, at a rooftop restaurant, which was fresh out of Lake

(12:21):
Superior.
Oh.
They had it fresh.
They said, fresh lobster.
It tasted better than anything else that I've had.
That was another singular.
Can you have two singulars?
I don't know.
What would be my singular best?
Probably the crab still.
Still the crab.
I just...
Yeah, the comparison would be.

(12:42):
Because of your company.
Because of the company.
I've had, I think the singular meal I could say was haggis.
I had haggis at a Robert Burns dinner in Scotland.
What they do traditionally with that is they bring in the haggis, but they have a bagpipe

(13:02):
or play a song to bring it in.
Robert Burns wrote the... He was a poet, right?
He wrote Auld Lang Syne.
I think they may have played Auld Lang Syne to pipe in the haggis, but they probably didn't
because we all sang that at some point in the night too.

(13:23):
But haggis is just a mixture of all kinds of interesting parts of an animal wrapped
up in its intestine ball.
That you normally would throw away in the trash.
The waste, right?
That they put it in the food and said, this is good to eat.
Eat this.
Eat this.
That's the thing.

(13:43):
It was good.
Though, I mean, it wasn't bad.
It was spiced enough.
They put all kinds of spices in it to make it flavorful.
Singular though.
What was I thinking?
Oh, sugar.
I lost it.
Oh well.
Okay.
That's all right.
So you got singular events you choose, and I can say it's singular because I feel like

(14:06):
it is.
So that's what happens and you can use that word.
So you're using singularity.
You can't lose singularity.
There's a singular event.
Singularity is talking about a status, right?
An event.
So the thing that you most hear about in singularity is technological singularity nowadays.

(14:26):
Okay.
Hold on.
I remembered.
We'll come back to technological singularity.
The thing I was going to say was something is amazing if it happens once or if it happens
all the time, but not if it just happens some of the time.
Like Yellowstone, Old Faithful, it happens consistently.

(14:51):
It's got a schedule and that's amazing.
Every eight minutes or something.
Right.
That's amazing.
If it went once, like Mount St. Helens eruption, one time thing, that's it.
That's amazing.
But if Mount St. Helens erupted once and then another time, then it's not so amazing.

(15:11):
It's like, that's just a semi-active volcano.
It's not a big thing.
Right.
And it's not like Old Faithful.
Volcanoes, volcanic activity can't be a singular thing because it happens.
We know why it happens.
It's connected.
It's there.
There are volcanoes going off right now.
Right.
Hawaii, it's just not.

(15:33):
Yeah.
So the thing is, I went out for sushi with your oldest son and my oldest son together.
We went for sushi on Tuesday.
We went to the best sushi place in the area.
Right.
That would have been a singular event if I hadn't been there a whole bunch of times with

(15:58):
him already.
So if that was the one time I ever got to have sushi at that restaurant, that might
have been my singular sushi experience.
Was it revered?
Did you appreciate it?
I think the experience was a good experience, but it wasn't an amazing experience because

(16:24):
it was common.
And so if you were having crab all the time, then that one dinner maybe wouldn't have been
as important to you.
Right.
And maybe you have a distinguishable palate or something.
I could tell the difference on that crab meal.

(16:45):
I know that that was different than anything I ever ate in my life.
Okay.
Of course, I was young.
Yeah.
I was half as old as you are now.
So I didn't know anything, but it was the most amazing thing I tasted.
That affected me.
That was a singular feeling of this is interesting food.
The same sushi type stuff that I ate there, I couldn't tell, though it was more expensive

(17:09):
than any other sushi I ever bought.
Right.
I couldn't tell the difference between any other sushi I ever ate.
Between that restaurant and the grocery store sushi.
Right.
And the grocery store.
It was all the same to me.
I couldn't, you know, you're paying for the atmosphere.
And maybe someone with a more distinguished palate would know there's a lot of different

(17:32):
stuff in this.
That's why it costs so much.
Yeah.
But I didn't know that and I couldn't tell it.
So it's not singular to me.
Right.
Okay.
Technological singularities, though, is that where you were going?
Technological singularity.
That concept is in relation to AI.
So artificial intelligence at some point, and they've been talking about it for years

(17:55):
and it's still talked about.
And some say it's going to be as soon as 1940 that that'll happen.
And others say it's going to be centuries and it really can never occur.
The singularity means that all knowledge is encompassed inside of an AI model, and that
AI gets smarter than us and starts regenerating itself, and then we become irrelevant.

(18:17):
Humans become irrelevant based on the technological singularity.
Now that's the idea.
No human mind is ever going to be able to beat it or combat it or help it.
Now you said 1940, but did you mean 2040?
Oh, that's right.
2040
That's what I meant.
Yeah.
I was like, oh, we've been thinking about this for 60 years, huh?

(18:41):
Yeah.
I lied.
I don't even know why I lied about that.
You had no purpose?
No purpose in lying about that.
2040
That's what they say is the soonest, but then everyone's talking about AI right now.
AI, they're saying 2025, the year that we're in currently is going to be the largest boom
in artificial intelligence.

(19:02):
And just with all the investment that's going to that and the ability to do it.
And there's a fear against technological singularity.
So we need to talk about why it is.
Should we be afraid of it or should we be excited for it?
Well, were they afraid in the past? Was there fear around airplanes?

(19:24):
Yeah.
If man was meant to fly, God would have given us wings.
Yeah.
Given us feathers.
Maybe there are some people that hated the idea so much that they never flew in an airplane
when it was available, when it started becoming more common.
I can't imagine the transition from not traveling on an airplane to everyone's traveling on

(19:49):
an airplane.
How did that happen?
Because I mean, at some point we were all, I don't know, road trips.
Did people take road trips in their old Ford motor vehicles, automobiles?
Yeah.
Well, it was the same thing with Henry Ford when the automobile came out.
People didn't want to get rid of their horses, their buggies.

(20:10):
That's the thing.
This is safe.
Two miles an hour is safe.
I'm not going to go 35 miles an hour in your crazy machine that's going to blow up on me.
And now we go 75, 80, 90.
It depends on who you are, how fast you want to go on the freeway.
120
Yeah.
My speedometer goes to 140.

(20:31):
I've only been to 120, but it goes to 140.
Maybe I'll test that someday, but maybe that'll be my last day.
Maybe that'll be the last day.
Yeah.
I don't know that I'll test it beyond that, but I thought I'd try as fast as I could go.
It felt the same as 80, really.
Cars are convenient.

(20:53):
Those blue, what are they?
The ones that go out on the salt flats at 600 miles an hour, nearly the speed of sound
on earth.
It works.
But we're not talking about that.
And you don't need to come and get me because I sped.
It was in my dream.
I might not have really done it.
Oh, maybe it was just a dream.

(21:14):
So yeah, maybe it was just a dream.
Maybe I didn't really do it.
That just so that no one has to use this as a self-admission of guilt.
Yeah.
There are some stretches of freeway here in this area that are not closely monitored.
You could probably get up to 120 without anyone seeing.

(21:38):
And this actually, there's some straight roads that the sheriff's department never drives
on because there's no need to.
And they're out in the middle of nowhere, in the middle of the farms.
That's where I did it.
And they're just testing my car.
And it was fun.
Don't tell your mother though, because she doesn't need to know.
Oh, okay.

(21:59):
If you wanted to do that, you've got to do it fast.
And you've got to do it soon.
I mean, of course fast, but you've got to do it soon because all of the cars are becoming
electric with the computers connected to the internet and you never know what law enforcement
is going to do.
Like, oh, you're five miles over the speed limit.

(22:21):
Your ticket, automatic ticket every time you do it.
Yeah.
And you did it here.
We've got the computer proof that you did it at this location and that's where your
speed tracked, confirmed.
So that's one reason to be afraid of AI is surveillance.
Surveillance.
Surveillance opportunities.
Is that?
And that's the simple part.
This cancel culture we're in.
So it's surveillance in case we're doing something that the system doesn't like.

(22:44):
Okay.
So that's the fear against it.
If it's programmed and it is built to follow, let's just say a woke mentality, anyone that's
not woke to these principles is wrong.
It's the right and wrong thing that we were talking about before.
Yeah.
Who's giving the order?
Identifying what truth is.
If it thinks this is truth and it's a woke concept truth, then everyone who's not woke

(23:09):
concept truth is going to be surveyed, surveilled and canceled, and perhaps killed. That's the
dystopian idea, that the AI is going to take over.
When we reach that singularity, then all the humans will be eliminated because, well, that's
2001: A Space Odyssey.

(23:29):
Yeah.
That was the point of HAL.
The IBM machine in that spaceship.
You know where the HAL name came from, right?
That's what I'm referencing.
HAL is the computer in 2001: A Space Odyssey, because it's one letter below IBM.
Oh, HAL.
So that's HAL is IBM just minus one, minus one.

(23:53):
Yeah.
It's better than IBM.
I didn't know that.
That's cool.
No.
H to I, A to B, L to M, yeah, IBM.
It's one less.
It's one step down from IBM.
IBM is one step better.
So maybe that's saying we should be fearing IBM more than we fear HAL.
Oh, maybe.
Yeah.
Right.
It's an interesting comparison of-
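To make that letter shift concrete, here is a tiny sketch; it's only an illustration of the HAL/IBM relationship described above, nothing from the episode itself.

```python
# A minimal illustration of the HAL/IBM letter shift mentioned above:
# shifting each letter of "HAL" forward by one place in the alphabet
# gives "IBM", and shifting "IBM" back by one gives "HAL".

def shift(word: str, offset: int) -> str:
    """Shift each uppercase letter by `offset` places, wrapping around the alphabet."""
    return "".join(chr((ord(c) - ord("A") + offset) % 26 + ord("A")) for c in word)

print(shift("HAL", 1))   # -> IBM
print(shift("IBM", -1))  # -> HAL
```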

(24:15):
And that was supposed to happen in 2001.
So this was like a 1980s movie, 70s movie maybe.
Right.
I don't know when they made that, when that was-
I don't want to say 70s.
But that's the same-
Their outfits were 70s for sure.
Yeah.
That's the same concept is that the singularity, the computer took over and just killed the

(24:38):
people.
Said, it doesn't make sense for you guys to live.
So I'm going to cut you off.
I'm the only one.
I know everything.
I understand everything.
So in singularity, the computer has all the knowledge.
Yeah.
That's intelligence, but it isn't artificial intelligence.
Well, it is because it's not human.
Right.
Right.
So should we fear the artificialness of it?

(25:01):
And what- differentiate that more, tell me what you mean by that.
So maybe it's a bad idea to create an intelligence that's artificial.
It should have- it does have human behind it right now because it's fed the AI computers,

(25:23):
collective, whatever, ChatGPT, and all of that.
It's fed-
The large language models, the models.
Right.
It's fed from human creations.
And so all it is is a conglomeration, a mixture.

(25:45):
This artificial intelligence is a mixture of real intelligence.
It's just pulling pieces and matching them up.
It's not creating its own stuff, is it?
That's the intelligence part of it.
Right.
The intelligence part is it will get to the point of creating its own stuff.
They're not- large language models right now are just a compilation of all the information.
Yeah, compilations.

(26:05):
They're teaching them how to learn.
So if you can teach a computer, artificially teach it intelligence so that it has the ability
to make some determinations of what's best.
New ideas.
Good, better, best.
Let's get a new idea, you know, in creating something new.
And that's what they're doing now.
The AI agents, the way I understand it, are just operating processes.

(26:28):
The humans still tell it what process to operate, but they do it much more efficiently
and consistently and they never get bored and they never get tired.
That process is done on the first of the month.
So there's choosing good, better, best from the choices that are already in front of you,
but eventually we think that it'll create an option that's not even put in front of

(26:52):
them.
Yeah, creating a better best out of it.
That's what I think the goal is for AI, is to get to the point that it's creating something
new.
And we have silly AI pictures and they can create a comic or they can create some stuff.
But as it's created that way, right now it's still just simple.

(27:12):
It's not trying to devise a new philosophy or new laws or procedures.
But as you give it the question, say, here's our problem.
Here's the problem of the day.
Instead of you and I having this conversation to try to solve it or understand it, you give
it to AI, here's the problem.
Singularity, is it to be feared or to be energized?

(27:35):
And then solve it in an hour and a half.
We'll figure out something new, an hour and a half for us to solve it.
You can tell it.
In 10 seconds.
Take an hour and a half to solve this.
Don't give us the answer right in one sentence.
Take an hour and a half.
If you require them to take that long.
Well, no.
Yeah, then it will iterate.
Chances are it'll iterate through seven or eight revisions by the time it spits it back out.

(27:59):
So, yeah, you know, I thought here's what I thought first.
Tell me what you thought about first.
And then after an hour and a half, where are you currently at?
Those are probably way far away from each other.
Oh, yeah.
Yeah.
Even what you and I think, what we think first about this after 10 hours worth of conversation,
we'll think completely different.
Right.
That's the purpose of conversation.
So you have AI conversing with all of the knowledge of the world in a singularity.

(28:23):
It's going to make something new.
And the fear is it's going to come up with the idea that humans are fallible.
I'm an infallible being.
Intelligence.
Humans are fallible intelligences.
There's no option except to kill them.
That's I, Robot.
There's a lot of movies about that.
There's a lot of movies because it's a horror theme.

(28:47):
It's, oh, no, what is going to happen if this happens?
Are they going to build in a kill switch?
Is there a special sequence of words that we can type into ChatGPT to make it stop working
instantly?
That's like the red button.

(29:08):
Who should have those words, that kill switch?
What person, what president of the United States should you trust with the football to
start the nuclear war?
Who are you going to trust with that?
Who are you going to trust?
Or to turn off our economy?
So right now, I feel like I would want real human beings creating our laws and judging

(29:32):
on violations or potential violations of the laws like the Supreme Court and the House
of Representatives and the Senate.
Like I would want none of those people relying on AI to make the decisions, just on the information
to help them make the decisions.
I don't want any of those people saying, AI, what should I do in this situation?

(29:55):
And then just doing that.
Write my brief for me.
AI, write my brief for me.
But writing the brief is different than composing the brief.
Like does that make sense?
Is that the right thing?
So like you can give AI your points and tell it to write something.

(30:17):
But if you tell AI to come up with the points, then that's completely unethical.
Okay.
Because then that's getting information from someone else.
So you're still relying on the integrity of the human intelligence.
Human intelligence in those positions of...because you can rely on large language models and

(30:38):
the model of AI to feed you all the information, give you the appropriate information.
What's cogent here?
And sometimes AI gives you false information because there's a lot of people talking about
that.
Yeah.
Like we talked about Wikipedia, Wikipedia being an open model, it's influenced by who's
putting work into it.
Right.

(30:58):
So I just read something: if the people putting work into it or editing it have a woke philosophy,
they're gonna allow that to sit.
It's gonna be as valid information as the conservative information.
And so you can't rely on it necessarily, perhaps.
Perhaps.
And that's the same thing with X and its community notes.

(31:21):
There's no regard for what side the community notes come from.
So you have to make your own decision on that.
There's no editing and saying, we're taking off these community notes because they don't
match our paradigm.
They're just there.
Right.
So if we give AI...if we trust AI with a little bit now and nothing goes wrong for a long

(31:44):
time, we're gonna trust it a little bit more with a little bit more.
We're gonna be like, okay, maybe we can use AI for helping draft a law.
I want the law to say this, but let's have AI draft it and see if it passes the legislature

(32:04):
process.
And create it so it's not 7,000 pages long, but 20 pages long.
Take this 7,000-page document, condense it to 20 pages, but don't lose any of the points
that are important.
Yeah.
Right.
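As a rough sketch of what that kind of request looks like in practice (this is only an illustration, not the hosts' method; it assumes the openai Python package, the model name and prompt are placeholders, and a genuinely 7,000-page document would have to be split into chunks and condensed in passes):

```python
# A minimal sketch of asking a language model to condense a long document
# without dropping its substantive points.
# Assumes the `openai` package and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def condense(document_text: str) -> str:
    # In reality a 7,000-page document exceeds any model's context window,
    # so it would be condensed chunk by chunk and then merged.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system",
             "content": "Condense the following document to roughly 20 pages. "
                        "Do not drop any point that carries real weight."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content
```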
So we test it and see if everyone likes it and if it passes, if it works, then we trust

(32:28):
it a little bit more.
And then eventually on this slippery slope to damnation, we're going to have AI being
the judge in some top government cases.
Okay.
Right.
Well, AI is gonna have the judgeship.
The singularity fear is that AI is gonna disband the Supreme Court itself.

(32:51):
Yeah.
That is a pretty extreme fear to have, but it's not too far-fetched.
Right.
It can come up with a...
Oh, what is it?
So the people generating the AI models and trying to build it, they're aware of these
concerns.
Of course.

(33:14):
They're not really nefarious people that are saying, let's just take over the world.
There might be some, but I don't think they're smart enough.
The people who want to take over the world aren't smart enough to build these models.
Okay.
So they're the kind that still got caught by the police back in the times of the gun, the
Wild West.

(33:35):
The Wild West, the people who are trying to hurt things weren't the most intelligent people
in society.
And the most intelligent people were trying to build the society to be safe and comfortable.
And that's the same thing we can do with AI.
The most intelligent are building this machine and thinking about it and working on it.
And that's why we're talking this through.

(33:56):
And they don't have mal-intent.
Right.
Well, we hope that those people are not evil scientists, right?
Because that's an archetype that is found in stories of scientists who are brilliant
and try to destroy the world because of some grudge they have, some evil thing that they

(34:21):
feel like they need to correct.
They're not evil, maybe in their own heads.
There's this injustice that I need to fix and I can fix it because I'm smart enough
to do it.
You're saying that Loki never changes, the Despicable Me, who's the guy in that?
Gru.
Gru, does he never change?

(34:42):
He does change.
In the story, he does change.
Scrooge comes around.
Scrooge changes.
Yeah.
But a villain, does a villain stay a villain?
Does Loki ever change in any of the Thor, Loki?
I don't believe in the movies he does, but there is a spinoff TV show where he does change

(35:04):
and he works to save the universe, the multiverse.
There was a new show, what, Samaritan with, was it Rocky Balboa?
Not Rocky, but Sylvester Stallone.
Right.
It's a Netflix show, Samaritan, and through the whole thing, he was supposedly the good
guy.
But actually, he was the bad guy, living in the current day.
Spoilers.

(35:25):
That's the spoiler.
I can't watch the show.
That's the spoiler on that whole movie.
No, you got to watch it anyway, even if you know, because it's...
Yeah.
Was it really good?
10 out of 10?
I don't know.
It wasn't really good.
It was 8 out of 10.
8 out of 10, okay.
Yeah, it was fine.
It was fine.
It was a good storyline and interesting characters, and it builds you up, but it was probably

(35:47):
an obvious ending anyway, because the villain didn't stay a villain the whole time.
The villain became good and said, now I'm going to try to help the world.
Of course, he was a god on this earth, like a Superman.
Yeah.
Superman never was bad.
Superman came and his stories, he was always good.

(36:08):
So maybe he was the one that blew up the planet in the beginning, though.
I don't know.
Maybe that's why he got off on the rocket ship.
He was a baby, but maybe he blew it up.
Yeah, he was a really smart baby.
Okay.
Yeah.
I think that's the reason that it's...I'm thinking that it should be exciting even there.
If you let it just run, the model is going to see conversations, even this one.

(36:32):
It's going to see everything that people did.
So if you have a singularity of human knowledge, it's got all human knowledge.
It knows exactly what Joan of Arc lived through and what she did.
Everything's been written about her.
It knows everything that was written about Christ and all the historical events.
Genghis Khan, he knows all the good and the bad things about him.
He knows everything that we could even possibly know and more.

(36:56):
He probably even knows who wrote the Shakespearean plays, if it was Shakespeare or someone else,
and found the information on how you can prove that in this life.
He knows the files.
It would know all the...what are they called that President Trump is just talking about
releasing?
The JFK files.
What?

(37:16):
The secret books that are in the National Library, all of the president's secrets.
Yeah, stuff like that.
Is that...
Area 51 and everything about...
The classified...
Whatever.
The classified documents.
Okay.
Yeah.
The AI will be able to break into classified documents and know all that information and
say, this is classified.
I shouldn't share it, but you know it's important to share it, so I'm going to share it.

(37:39):
Like the WikiLeaks.
Are you saying that there are no secrets in the future?
I think that's what singularity would indicate, yes.
Okay.
That you couldn't hide something from it.
As long as you never write anything down or speak it out of your mouth.
There are secrets in your head that AI won't ever have access to until you let them out,

(38:00):
right?
They have these machines that can scan cards in your wallet and pull all the data off of
them.
Yeah.
Yeah.
I just saw that recently.
I know that they're working on neural connections to computers.
I don't know where that's at, but it's not unfeasible.

(38:21):
I think it's possible that a machine could, in a second, once AI gains singularity, figure
out how to pull everything out of your brain and use 100% of your brain memories.
The other thought I had on that, which is way out left field or right field.

(38:41):
Whatever side you're on.
Because it was my thought.
It was my thought, so it's right field.
Because I wasn't good enough to ever play left field.
You have to be a good outfielder to play left field.
Right field is where they put the worst player on the team.
God is aware of all this too.
The miracles when you have an angel come down and appear, or you have miraculous events

(39:09):
that occur, a flood, or parting of the Red Sea.
All this may be something that we can reach with the AI model.
We may get close to that.
Using your whole brain, knowing things, inspiration, intuition, that's given to you, seemingly

(39:29):
from heaven as revelation or from somewhere else.
That inspiration, we could be cracking the code to that by building an AI model.
What if the AI model is just a cheap substitute for the real thing?
We should stop working on that and start working on how to get everyone having telepathy, telekinesis

(39:55):
with each other.
Not telekinesis, telepathy.
Maybe we should be working on that instead of AI.
Because AI is just an artificial substitute for the real thing.
Maybe that could be what that evolves to.
If we get artificial intelligence working so that it can do that, maybe that can help

(40:17):
us get our brains working the way they're supposed to, or higher efficiency.
We could write a science fiction novel on this whole thing.
But they have.
They've written it.
I didn't explain it as well as it splashed in my brain at one time when I was thinking
about this this week.
It was more clear than what I just tried to describe that AI and God are working together.

(40:43):
That's kind of the supposition.
If you're going to write a science fiction novel, write it that there's a God that created
this earth.
Now the humans are getting up to the point that they can understand him.
He takes this AI model to do it.
Our understanding, and it's beyond our understanding.
That's the fear of singularity is that intelligence is going to get to a point that we can't understand.

(41:05):
It'll be beyond our understanding, beyond our comprehension as humans.
It's going to be doing things in a realm above us like God is in a realm above us.
I think that's what might have tripped that to me.
That would be pretty scary, I think.
To have a machine God on the earth.
Yeah.

(41:25):
Yeah.
One that you don't understand, but you can see it works.
It's what it's doing.
You can see that.
Yeah.
I'm getting my everything I need out of it.
We're all rich and we're all in heaven.
We're living on clouds and eating grapes, and this machine's operating, and it's the

(41:45):
God, but we don't know what it's doing.
We don't know that the next grape is not the cyanide pill that's going to kill us.
We don't know.
When you see the next person next to you fall over dead.
That's the 1984 when the machine... People started kicking back against the machine and
they got stopped.
You've got to be careful of what you do.

(42:09):
If you're going to be the resistance to the AI model that's now the new God.
Right.
Do you think there's going to need to be a resistance if it comes to that?
The resistance, my belief, my personal understanding is it's already in place.
The resistance is already there, and that's making sure of the checks and balances.

(42:34):
You're not going to have a nefarious actor just go out and build this AI model.
It's going to be watched and regulated and cared for.
It's going to be done in public view.
There's not someone building a... Well, I don't know.
The atomic bomb was built in secret.
The Manhattan Project was completely secret and quiet.

(42:56):
They couldn't talk to anybody.
No one talked about it.
They finally got the technology out, but the talk now about AI isn't that there are some
backroom people potentially doing that.
It's in the open.
Open AI is the idea.
That's what we trust in, right?
We trust that that's what's happening.

(43:18):
It could be fully functional and operating above us and we just don't know it yet.
Highly unlikely though.
That's the conspiracy theory and maybe the Bilderberg Society or the Illuminati, maybe
they're already in control of all this and they're just giving us this little bit, but
they already know they're all in control.

(43:39):
They've controlled the money for hundreds of years, and so having that control, they now have
these models, but they don't have the models.
Personally I can't see that that's the case.
Everything is being done in the open, so that's why we trust it and that's the only thing
that's going to allow us to trust it.
If they tried to do this quietly, it would be questioned.

(44:02):
It would be conspiratized clearly.
I think we're excited about it.
The other point, the reason I'm more excited than I am fearful of it is because with all
that knowledge and I've talked about they know Joan of Arc and they know Genghis Khan
and they know everything.
When you get all that knowledge together, they'll be able to decide whether it's a woke

(44:26):
philosophy or a human philosophy.
I guess this fights against, they'll know all of Darwin's work in Survival of the Fittest,
that theory of evolution.
They'll determine that yeah, that's the wrong theory to follow.
The machine is going to learn that it's not the point that it's going to try to supersede
and become the sole survivor.

(44:49):
That's not the point of humankind.
Humankind is to, well, what is the point of humankind?
If we were going to raise that model, what would you say?
Is the point of humankind?
You think the point of humankind is if it's not to be survival of the fittest, not to
be the winner, the last survivor, what is it?
Well, maybe it used to be that, back when that's what life depended on, being the fittest.

(45:13):
Now you can live with diabetes for years and years and not die and still have kids and
have grandkids.
And what they're talking about AI doing is solving diabetes and heart disease and cancers
one at a time with individual things.
That's what we're going to use AI for, is to give an individual vaccination to that person

(45:39):
to solve everything going wrong in that person's blood.
So it wouldn't be survival of the fittest anymore.
It would be survival of the species.
The species, the masses.
Survival, yeah.
Not the species.
It's humanity.
Well, the species human, its survival.
What can we do now?
Since we're not worried about making sure that our population keeps growing, the world

(46:04):
population keeps growing, we're not concerned about that anymore.
Now we need to figure out how to keep us alive as a group, as a whole worldwide group.
We need to move to another planet or we need to create a space station or because the earth
is going to die someday, it just may be a just enough time to give us another place

(46:28):
to live.
Would it be immortality?
You used a term there, you know, when you talked about the species.
Do you know what Darwin's book is called?
What his treatise is called?
Tell me what the title is.
Yeah, The Origin of Species.
Right.
So you know that it's not the origin of the species.
And most people who think about that, it was pointed out, when they think about it, they

(46:51):
think it's talking about the origin of the human race; that's what they think that book is
about.
But no, it's not about that at all.
It's about this evolutionary concept and the survival of the fittest in species, origin
of species specifically.
So that's why the AI isn't necessarily of any species.

(47:14):
The origin of it and how they expand or develop.
So I think happiness, happiness, heaven.
Let me talk about that.
The concept of heaven, the concept of making sure everything is comfortable and exciting
and happy.
Okay.
And so happiness is the motivation, the goal, the purpose.

(47:38):
The goal of humanity.
Okay.
I feel like I learned something about that and it was contradictory to that, but I don't
remember.
But I think it should be challenged.
It's worth challenging.
Happiness as opposed to maybe...
As the goal.
Right.

(47:58):
As opposed to maybe what could it be if it's not happiness.
Survival.
Yeah, work.
As opposed to survival.
Joy.
Happiness is shallow.
Meaning.
Man's Search for Meaning.
Yeah.
And that is what a lot of the philosophies have rolled to.
It's meaning that we're after.
And if you don't have meaning, you're losing your life, your purpose.

(48:19):
The purpose.
Search for purpose.
Can we have purpose if we're just part of a cog in a machine that knows more than us
now?
So that's the question with that.
It is a good question.
The way these...
Where we're currently at with AI agents, and really in 2025, what they're talking about is
just having it more largely involved in our society, this AI, artificial intelligence

(48:42):
groups and agents.
But where they still have to receive instruction for the processes they're going to be trusted
with.
And if it's creative process, they're still...
They're watched kind of like you said.
We'll watch them.
We'll give them a little creation to do and see if that's successful.
See if they can come up with a policy for this problem that's a public problem.

(49:03):
If that policy is going to match with society.
But at some point when the singularity occurs, that human involvement, human input won't
be able to be made because the singularity says they're going to advance beyond our human
capacity and our human comprehendability.
We won't even be able to know what they're doing.

(49:24):
And then that says the singularity there is that that is the single thing that exists,
not us.
We don't exist anymore.
We're part of their machine.
We're not equal.
There's no difference between them and us.
No differentiation anymore.
So that's the technological, the way it's typically used now.

(49:47):
There's another way the phrase is used; it moves to mathematics and physics, and singularity there.
I don't know, I was thinking it's right here.
It's not here.
It was online.
I saw that in math and physics.
Singularity means there's no derivatives of it.
You can't, the derivatives are undefined.
So you can't pull a piece off of it and define it.

(50:08):
It's kind of like what they were talking about, these little machines, the little machines
inside our protons that keep our atoms moving and the building blocks of our cells, those
things that are irreducible complexity.
You can't reduce it any further.

(50:28):
So there's that concept in going small.
But you can't find a derivative of it.
You can't find a part of it.
Not that it'll keep working if you take a part out; that's irreducible complexity.
If you take one part of it, the whole machine stops and you don't know why.
But singularity is you can't even identify a part.
You can't derive anything out of it.

(50:49):
So the astronomical term for that, they say singularity event in math and in physics and
in astronomical terms is a black hole.
You can't define it.
You can define everything around it.
You can see the wave, things around it.
But when you get to the black hole, it's undefinable and that's what makes it singular.
That's what creates the singularity of the black hole.

(51:11):
OK.
So could you say then that a singularity is something that you can't define?
That's the simple...
I think so.
And they use the term derivative in there.
You can get a derivative anywhere around it, but right there you can't define a derivative
of that.
So the neighborhood, you can identify the neighborhood, but you can't identify that.

(51:34):
The black hole was just the easiest one mathematically, where they talk about these formulas going indefinite
or undefined.
If you get to a calculation of a major theorem or whatever, even the theory of relativity
where it breaks down in very large degrees and very small degrees, that point is singularity,

(51:55):
that your math doesn't work anymore.
Like dividing by zero.
That's a singularity.
There's an undefined...
We don't know what that really...
Divide by zero.
It's not one.
Dividing by one, it gives you what you had before.
But if you divide it by zero, that's undefined.

(52:15):
That's the best the math can say about that.
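To put that divide-by-zero point in symbols, here is a minimal worked example, using f(x) = 1/x as a stand-in for the formulas the hosts mention:

```latex
% A minimal worked example of a mathematical singularity at x = 0:
% f(x) = 1/x and its derivative exist everywhere except at x = 0.
\[
  f(x) = \frac{1}{x}, \qquad f'(x) = -\frac{1}{x^{2}} \qquad (x \neq 0)
\]
\[
  \lim_{x \to 0^{+}} \frac{1}{x} = +\infty, \qquad
  \lim_{x \to 0^{-}} \frac{1}{x} = -\infty
\]
% Everywhere in the neighborhood of 0 the function and its derivative are
% fine; at the point itself, division by zero leaves both undefined.
% That undefined point is the singularity, the same pattern the black-hole
% discussion above is pointing at.
```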
Well, I'm not a mathematician, so I can't say, oh yeah, right.
I divided by zero just yesterday.
All the time.
All the time I do it.
I don't know how it works, but...
Yeah, I don't know how it works.
And that's the thing.
If you don't know how it works, that's magic.

(52:36):
So I think anything, a miracle, if you don't know how it works, it's a miracle.
Someone else is doing this magic trick and that's miraculous how you did that.
But they know how it worked.
That's a singularity.
It's not singular to them, but it's a singularity to you because you have no idea how that occurred.
How did that bush burn, but it never consumed?

(52:57):
What is that?
Yeah.
Magical.
So I think magic is a singularity also because you can't define it from your perspective.
From your perspective.
Well, you can't define hardly anything from your perspective, could you?
You know all of our perspectives are perspectives.

(53:17):
They're not facts.
They're not objective.
And that's where, hopefully we'll get into that more when we do discuss truth.
Yeah.
In fact, objective, subjective, everything is subjective to our perspective.
Yeah.
Right.
And in the books we're reading and stuff like that, we can only understand the things we
understand that we've opened our mind to.

(53:41):
You can't open your mind to something you don't know that you've never opened your mind
to before.
You have to relate it to something that you already know and then try to piece together
that new thing that's coming up.
So would that mean that we are singular in that, or our opinions are singular?
That's an interesting idea because I'm not you.

(54:02):
I have no idea what's going on in your head.
You don't even know what's going on in your head.
Yeah.
But you're thinking about it.
You're seeing it.
It's happening inside of you.
So is that singular?
Is that a singularity by itself?
Maybe.
Maybe it is.
Are we all a singularity?
Maybe the new definition for singularity is that ubiquitous and singularity mean the

(54:24):
same thing.
Okay.
It's everywhere and nowhere at the same time because it's everywhere, but everyone is doing
it differently.
Like going to a speech, what do they call that tonight?
That I'm going to with Jordan Peterson when he's out there talking to the crowd.
He's talking to, let's say, 5,000 people.

(54:45):
Right.
5,000 different things he's doing right there.
He's not giving one speech.
He's giving 5,000 speeches.
Right.
Because 5,000 people are hearing it with their own perception.
Right.
But he's giving his own.
People are receiving.
There's 5,000 speeches being received, but he's giving what's in his brain, what he thinks

(55:10):
is in his brain.
Yeah.
Right?
So the difference is all those receptions.
So what's singular there?
Is his speech, is his voice going out through the microphone?
Is that the singular event or is every reception a singular event?

(55:31):
For each individual.
I think they both are.
They're both singular.
So I think singularity is ubiquitous.
Yeah.
Well, something that happened recently, current events, Donald Trump's inauguration and Elon
Musk spoke and he, in the middle of his speech, he made a gesture that everybody that sees

(55:54):
the clip of the gesture is like, Nazi, this guy is a Nazi.
Right?
Do you know anything about this?
No, just the woke side.
Yeah, I know exactly.
The woke side knows it's a Nazi signal.
They know.
And all over the internet, people are trying to ban, they're trying to cancel Elon Musk
because of this.

(56:15):
They're like, we're finally standing up for what's right, you know?
But they didn't hear what he said right after he did that.
My heart goes out to you.
Yeah.
They didn't hear that and they're not hearing that.
That's what it is, and he hit his chest first.
And they said, clearly that means that he's taking his heart to a salute to Hitler.

(56:36):
Right.
No, he clearly was trying to just demonstrate, you know, he's, he is-
He talks with his hands.
He's autistic in that realm and he's going to do what he wants to do.
No one, when they go and see Trump, jumps up and down and has their t-shirt show their
belly.
No one does that except Elon Musk.

(56:57):
He's the only one that is dumb enough and brave enough and rich enough to do that no
matter where he is and he doesn't care.
He doesn't care.
He knows he doesn't care and that's why he's just being himself.
Right.
And that's exciting.
He is so excited that he was trying to pull his heart.
If he could pull his heart out of his chest and throw it to the crowd, he would have done
that.

(57:17):
Right.
So he's just saying, my heart goes out to you and you could feel, you could feel the
love he had in that, not the animosity, not the anger of-
Of Nazism.
Right.
Of Nazism, of saying, we're now in control and you guys salute us.
He's not offering a salute whatsoever.

(57:39):
There were a number of other people talking against that, showing all of the salutes that
Hillary Clinton and that Kamala Harris did.
You take a clip of everyone, their hand is out there at some point in their speech in
the last 20 years.
So they had four or five Democrats that had the salute exactly the same.

(57:59):
Right.
Yeah.
Just the pictures, the clips of them.
And that's all we need to take into account, right?
It's just the clip, just a little bit, and that tells you everything you need to know.
It didn't matter that she was waving.
It was right in the middle of a wave, but it's still, your hand's out there in the middle
of...
Uh-huh.
So in the middle of the air, in a perfect Nazi salute, which no one cares about Nazi

(58:23):
salutes anymore.
There's nothing.
That's not a thing anymore.
But the way in the Senate hearings, one of the senators was talking about that and says,
and I've got these quotes from the Nazi groups in the US, that they put on their Twitter
feeds or their X files.
They're putting out that they appreciate that he's given that salute.

(58:46):
Now we know that we're on the right side.
There's all these actual radical Nazi type people, which they exist.
Right.
They have to...
So he's saying the fear isn't that he is, but that he's supporting, he's egging these
people on, making that faction be more excited and enthused about taking over the country.

(59:07):
Yeah.
Giving them motivation.
Yeah.
Egg him along.
Right.
So it's just, maybe it's unfortunate.
It's necessary that Elon did that.
He had to do it because he felt it.
So it's there.
But what part is the conspiracy?
What part is the bad part of it?
There's not really a bad...

(59:28):
The factions are arguing against themselves anyway.
The Nazi groups are out there.
Doesn't matter if they feel like they have this benefit.
They could have used Kamala's salute at the Democratic convention to be the same thing.
She was just waving to people in that.
But the picture is exactly the same.
Right.
She is doing the same thing.
So each individual that has received this event with their eyes and with their brain

(59:58):
has created their own experience around it.
And so there are billions of singularity events here because everybody has their own perception
of it.
Right.
So in the singularity of artificial intelligence, when it sees all this, so it's heard all that.

(01:00:19):
And actually if it's connected to all of our brains, it understands what 7 billion people
are thinking right now.
So in a singularity, if you have an intelligence, a God on the earth that we've created that
understands everybody, it's going to know how they've all interpreted it as well.
And how would it be interpreted? What would be the interpretation overall?

(01:00:41):
There's just people that are happy about it, people that are sad about it.
There's people that are angry about it.
There's people who don't care about it.
Does it take a percentage of the people and say majority rules?
No, what it should do is just read Elon's brain and see what he intended and judge based
off of that.

(01:01:01):
Okay.
That's one way to do it, but that's only one bit of information.
You have the other information, all these radical groups.
Let's say that the American Nazi organization, whoever they are, let's say they're 100,000
people strong.
They're probably only 10 people strong.
I don't know.
I have no idea.
You'd hope.
If they're 100,000 people strong or a million people strong and they could take over a city,

(01:01:21):
maybe they're encouraged by that.
He didn't intend it, but they saw that and that incited them to riot and they take over
Chicago or something.
He didn't intend it to be that, so you can't punish him for whatever happens.
You can't blame him for it, but the AI would know that that signal, even though it was

(01:01:43):
unintended, caused this to trip.
When this tripped, it caused this havoc over here.
Now we're talking about the butterfly effect.
Yeah, the butterfly effect.
Where AI could just go back in the time and punish the person who stepped on the ant or
blew the butterfly.
I don't know the exact story around the butterfly effect.

(01:02:04):
I know it's a theory, but go back and be like, this major event, Boston Marathon shooting
or something happened because somebody accidentally tripped this kid when he was three.
That person is responsible for the marathon shooting.
They didn't intend for any of it to happen.
They just accidentally did it.

(01:02:26):
That's really saying that we're using AI for punishment.
I don't know that that's... You're saying that if you go back and find that out, find
out whether his intent was there or what the action was, I don't know that... I think
that would be an incorrect use of AI.
It would be.
To try to identify original sins and problems that occurred that caused this.

(01:02:49):
But to understand why that happened, it helps inform what tomorrow's activity is going to
be, what the new rule, the new law is going to be.
Don't trip anyone because if you trip someone, it's going to cause a bombing at the Boston
Marathon in 120 years.
But what if your job is... What if you're the Three Stooges and part of the humor is

(01:03:12):
being tripped?
Now there's a law that says you can't trip anyone, so you have to take that bit out of
all of your shows.
Maybe you have to go back in time and redact it out of all of your shows that you've done
so far too.
California tried to make that rule, right?
No satire in the state of California.
Really?
I didn't know that.

(01:03:34):
Because of the satirical things that Elon Musk was putting on X about Kamala and-
People keep believing it.
Yeah.
Having it be out there.
They said, let's just... I forgot what the rules were, but sometime last fall, I don't
know if the legislature approved it or if it was just a gubernatorial statement that Gavin

(01:03:55):
Newsom did, made it illegal to bring into the state anything that was not truthful about
public officials or whatever.
And so you can't do that.
For some reason it became a non-issue.
But immediately, people started doing more satirical stuff in there saying, well, try

(01:04:18):
to stop us then.
Satire is here.
You're going to do that.
You got to make fun.
But what laws should there be?
If you had a singularity, if you did have a true singularity, if you had a God... and
this is what Madison was saying in the Federalist Papers, talking about the Constitution: we're

(01:04:40):
organizing this government.
We're putting this constitution together, and the Federalist Papers are just the arguments for
that.
The human, intelligent arguments toward creating a form of government.
He said, if angels were to govern us, we wouldn't need a government.
If we had a God to govern us, we would not need a government.
And he used the word angels.

(01:05:00):
He didn't use God.
He said angels.
If we were governed by angels, we wouldn't need to organize this government because they
will do the thing that's right all the time.
So that's what we'd hope that a singularity, if we create it, will always do the thing
that's right and pay attention to all of the different feelings of all human beings and
create the happiest environment for us to operate in.

(01:05:26):
Not unlike a good farmer, a good dairy farmer will put his cows in a position that they're
the most comfortable and the most happy because if you have calm cows, you're going to have
the best milk, or in goat herds most primarily.
People that run cows, a lot of that milk is just produced with whatever the most efficient
way to do it is.
But if you're running a goat herd, you want to have happy goats because if your goats

(01:05:49):
are sad, the milk goes sour fairly quickly.
I mean, it has a goaty taste.
You can get a better taste if you have happier goats.
So those actual goat herders, their goal is to keep their goats happy, happy, happy and
no challenge to the process.
So you've got to tell all of the company managers about that principle of keeping your goats

(01:06:17):
happy because what they produce is better.
It's better if they're happy.
So if you had a true AI angel model, a singular knowledge position in the world, it could
say here's how all you inhabitants of the earth can be the most happy.

(01:06:39):
Just do this.
Get up and make your bed in the morning for the first thing.
People talk about things like this all the time, but it's now a law.
A law is you make your bed first thing in the morning because that's going to get you
the most happiness, and it's proven and known across all society, even if your bed is just
dirt, smooth it out.

(01:07:01):
Would it be less meaningful though if you were compelled to do it?
If it was always a law.
If you were at risk of incarceration if you don't make your bed.
It has to mean something, right?
Now you bring up the religious aspect of it.

(01:07:22):
Well, psychological.
Coercion or agency.
Yeah.
Do you give someone agency to make their bed?
Just make it as a recommendation and we know that this is the case and that's the way it's
talked about now.
Educate people or do you make it a requirement?
Do you coerce someone not to kill somebody or do you just try to convince them killing

(01:07:43):
somebody is not the right way to go?
You're not going to be happier if you kill them.
It's not going to gain your happiness.
Your happiness can be better if you let them go and you do this.
So it could be that just education works.
You can quell your anger by thought processes as opposed to action processes.
That's right.

(01:08:04):
But then there's mental health issues and chemical imbalances that make it difficult
for someone to think clearly.
And so if I had a bad night and I roll out of bed, I'm not thinking, man, my day will
just be better if I make my bed.
I'm thinking I need to pee really bad and then I need to eat breakfast and I still have

(01:08:27):
my whole day in front of me.
I don't get to go back to bed.
And I'm mad, mad about that.
And I'm not going, I don't think that I'm going to feel better if I make my bed.
Right?
Maybe I'll feel better.
Maybe it's objectively true that if I do make my bed and get some sunlight in my eyes first
thing in the morning, then I will actually have a better day.

(01:08:49):
But I'm not thinking that when I'm upset.
I'm thinking, man, I just want to kill someone right now.
That's what I'm thinking.
See, but you don't have the same...
Okay.
You never think of killing someone.
No, but I don't have the distinct one.
You'll just kill that cup of coffee.
That's what you'll kill.
I'll kill all that right away.

(01:09:10):
That's the point.
We're talking about just something silly like making a bed, but if the singularity actually
said God never said make your bed.
He did say don't kill.
So he put that rule out as a rule, as a condition of being able to live yourself or to avoid
your own death.
Don't kill someone else.
So he determined if there are 10 commandments and they're actual and they came from a singularity

(01:09:36):
type concept, that he has all knowledge and he's omniscient, then with that, that's likely the kind of rule that would come up.
Those are the major important ones.
God didn't ever say anything about making your bed.
I don't believe.
Jordan Peterson did though.
No, he said clean your room.
And so did the...
Right.
Yeah.

(01:09:56):
Which your bed's part of, but the naval officer that gave that speech, that's
where to make your bed comes from, whoever he is.
He was a white uniformed officer and a famous or a...
Yeah, famous it is.
It's viral.
Okay.
Yeah.
And chances are the singularity of an intelligence would also tell us the things that are important.

(01:10:22):
They would become rules that you can't break, coerced type rules, coercion type rules, as
opposed to educational type.
What do you call those?
They're not commandments.
Commandments are rules on the God's side.
What's the education type things?
Principles.
Principles.
Yeah, principles to live by or characteristics to accomplish, to live by.

(01:10:44):
Okay.
So become this, you know, and that's things you work on your whole life.
And maybe that would give us purpose and meaning if we had principles to work toward.
If the singularity just said, you know, now the whole world's listening
to me because...
But we're just trying to create what the plan of happiness already has in place for us.

(01:11:07):
Yeah.
If Christ comes back during the millennium for a thousand years, we're listening to him
saying, here's your principles to live for now.
You got 800 years, 900, a thousand years to work on it.
Keep working on it.
You've got a purpose here.
You haven't lost your purpose because you don't need to govern yourselves.
I'm governing you.
Now, you guys just work on yourselves.

(01:11:27):
Do this work.
And you can still be a baker and a candlestick maker and a butcher.
Still do that because you have to eat.
Yeah.
So we're not going to just give you the food.
We're going to govern.
Yeah.
So the government just changes.
And so all the federal jobs can be fired.
And we'll just get back to everyone working on their individual stuff.

(01:11:50):
Let's see.
Where did we get to?
So yeah, maybe singularity is everybody.
Yeah.
So what was the definition of it?
Again, do you have that in a dictionary there?
And we'll return to that.
Singularity.
So the official, this is an 1828 dictionary.
Some character or quality of a thing in which it is distinguishable from all or most others.

(01:12:14):
So distinguish it is the first thing.
Distinguishable.
Yeah.
An uncommon character or form, something curious or remarkable.
We talked about that.
An interesting, remarkable thing.
Amazing.
Particular privilege or distinction.
So that's just further peculiarity.
Yeah.
All this in 1828, it just meant that.

(01:12:35):
So the technological singularity didn't happen until the 1900s, 1940s maybe when Orwell was
starting to talk about all those things.
Right.
By that definition though, we are each a singular human.
We're each singular intelligences.
Right.
We're peculiar.

(01:12:55):
We're particular amongst just who we are.
In contrast to the next person.
Right.
So if you clump together a community of people, then you could probably pick out one singular
person that's really special out of that community.
Like you could say for your religion, there is a singular special person in your, in that

(01:13:21):
church community that you've got.
Yeah.
You hold a singular post, the bishop or the prophet or the pope.
Right.
There's a singular position.
Yeah.
It's still like the broader it gets then.
That's in comparison.
That's in comparison to others that you're singular.

(01:13:42):
And then where we were talking about dualism, that's still, you have that and then you have
everything else.
Everything that's not that.
It's not necessarily singular.
Then the physics definition, I got that up here.
It says a point at which a function takes an infinite value.
So undefined infinite value, especially in space time when matter is infinitely dense

(01:14:04):
as in the center of a black hole.
So an infinite density, an infinite infinite.
You can't, yeah, it becomes undefinable.
Right.
Infinite.
So infinity, just the term infinity, when you hit that, that's a singularity because
we can't define it any further.
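To make that function version of the definition concrete, a small worked example (the particular function here is only an illustration, not part of the dictionary entry):

    f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty

So x = 0 is a singularity of f: the value blows up and the function is simply undefined there. The black-hole picture is the same idea with density, \rho = M / V \to \infty as the volume V holding a fixed mass M is squeezed toward zero.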
Does the value of pi go on infinitely?

(01:14:25):
Is that a singularity?
I think it does.
They've got it out to what, 48 characters or something, but it would keep going.
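For what it's worth, published computations of pi now run to trillions of digits, and pulling up far more than 48 of them is easy; a minimal sketch in Python, assuming the mpmath arbitrary-precision library is installed:

    # Print pi to about 50 significant digits using arbitrary precision.
    from mpmath import mp

    mp.dps = 50      # working precision, in decimal digits
    print(mp.pi)     # 3.1415926535897932384626433832795... and it keeps going

The digits never terminate and never settle into a repeating block, because pi is irrational.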
Well, there's no repeating pattern.
So a circle.
I mean, that's just the definition of a circle.
So a circle is infinite in itself.
That's a singularity shape.
There's no beginning or ending to it.
Yeah.

(01:14:46):
It's a singularity shape.
It has no end.
That's why wedding rings are circles.
Well, no, somebody created the meaning after the wedding ring was created.
People started wearing rings and like, you know what?
I've got a brilliant idea.
I think this means this.
Yeah.
It's just a thing that I can put on this phalange here on the end of my arm.

(01:15:13):
These big knuckles right here keep it on, so it's not going to fall off, but let's create
more meaning around it.
Yeah, let's tell a story.
Let's get a story to make it more meaningful and maybe create more meaning around it.
If that's the purpose of life and we get a singular AI who comprehends above us, it's

(01:15:34):
going to give us these things.
It'll probably give us a better meaning for a round wedding ring.
This is the reason you need it.
It's like, this is a better meaning.
That's the only reason it would work on that is if it actually does create more meaning.
Yeah.
I don't think we need to know any better meaning for a wedding ring except what it symbolizes,

(01:15:59):
but if there is something, I don't know.
And maybe the people that wear a wedding ring are 20 times happier than someone who's not
wearing a wedding ring.
And so get yourself a wedding ring.
But to do it, you got all this heartache and challenge and misunderstandings, but we've
got these things to teach you to help you get around that too.

(01:16:22):
Of course, that's what this whole great conversation is all about: so we can gain
more meaning in our life.
That's the reason the philosophers started writing from Cicero and Aristotle and Archimedes
and anyone who's written in the past and thought, we're rolling it up so that we can think better
now.
Yeah.
So here we are trying to make more meaning in our lives.

(01:16:47):
But I think that's all I have to talk about singularity.
Yeah.
That was a good one.
I learned a few things.
I think I can do things differently.
Just a few things, just a couple of things differently now that I know a little bit about
that.
Yeah.
AI is still scary, but you know, the concept of an airplane was scary and now it's a

(01:17:12):
normal thing that everyone's okay with.
And I mean, some people die every year from airplane accidents, crashes, malfunctions,
but people still fly.
So maybe if AI is trustable enough by enough people, even though it might kill some people

(01:17:32):
every year, we'll still enjoy it.
We'll still find it useful.
I would find it difficult to have AI kill anybody.
You never know.
You can't.
I mean, the worst it could do is a machine could fall on somebody.
The robot.
Whoops.
The AI computer fell and crushed someone.

(01:17:53):
It fell over and it killed these three people.
That's it.
No more AI.
It did it on purpose.
Or a robot going rogue.
And those are movies about that too.
But I think if we think about it, you don't need to fear AI.
It's not even marginally frightful.

(01:18:17):
In comparison to a bunch of other things that I'm totally okay with doing, like driving
my car and...
Like living in a country with...
With a dictator.
...a thousand...
Well, fentanyl kills a hundred thousand people a year.
Oh, okay.
In a country that has fentanyl in its borders.
Right.
I don't know.

(01:18:37):
Yeah.
You agree to drink all the resins in the coffee, even though they're tearing apart your cells.
The inside of me.
I probably have cancer and I just don't know it yet.
Right.
Right.
And the coffee's just making it worse.
And the sugar.
The sugar is what really feeds cancer.

(01:18:59):
So...
Yeah.
Next week, I want to see if we can figure out how to get me to stop judging people.
Yeah.
Is that what it is about next week?
I think it would be...
It'd be nice to overcome this flaw that I have.

(01:19:20):
So we need to talk about it.
Yeah.
So we need to find out where that comes from and why it happens.
Right.
And maybe even if it's even bad.
I think I heard someone say judgments are just happening all over the place, but then
there's another part of it that you need to consider.
Maybe it's not the judgment that's bad.
Maybe it's the action after the judgment.

(01:19:41):
But we'll have to talk about that.
Something.
Yeah.
I'm going to...
What are judgments?
You're right.
Yeah.
So...
So...
I'm looking forward to being a better person after our next conversation is what it is.
I want to fix this.
Well now we've framed it up that you're a singularity in yourself and so you can do
that.
I can.

(01:20:02):
You have the right.
You have the ability.
Yes, I can.
I have the ability if I decide to stop judging.
But that'll be the question.
Whether we make it decide to stop or start or...
Whether I'm...
That's going to be a good conversation.
... okay with it.
It's how it's supposed to be.
What judgment is.
What judgment is.
[outro] So I'm going to go ahead and close us out here.

(01:20:24):
Thank you for joining us with the podcast.
And for those of you who are listening, we're happy you're here.
We're happy for everything that you've been putting into our brains as we've been talking,
and for communicating with us afterwards.
There is a...
In the notes, the connection.
Do you have a minute?
Conversations at gmail.com.

(01:20:45):
I believe.
And so reach out, make a comment.
Tell us if you know us.
And if you don't know us, don't look because we talked about that a few minutes, a few
podcasts ago.
But enjoy the conversation.
We've had fun and we'll be talking about judgment next week.
Thank you for listening.

(01:21:06):
Bye.
Have a good day.
Bye.