Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Hello everyone and
welcome back to the Lunatics
Radio Hour podcast.
I am Abbey Branker and I will be joined by Alan Kudan and our
friend Andy as we continue our conversation into the theories
behind aliens, quantum mechanics, UFOs, UAP and all the fun
stuff that we started talking about last week.
Again, this was an incredibly fun and eye-opening conversation
(00:22):
and, just as I did last week, I want to just disclaim that the
point of this episode is an exercise in questioning and
opening our eyes a little bit and not just accepting the truth
to be the truth that we are told.
But we believe very firmly that conspiracy theories can be
incredibly dangerous, and some of the stuff we're going to talk
about today, I think, is a little bit deeper maybe than
(00:42):
last week, and so just a reminder, there's so much
research out there to be done.
The point of any of this isn't to inspire anybody to walk away
with a firm belief on any of the topics that we are talking
about or to have anyone believe any of our theories on any of
these topics.
Again, I really left this conversation feeling interested
and skeptical about these theories all at the same time,
(01:03):
and I think that's kind of the power and the beauty of it.
If you haven't listened to part one of this series, I highly
encourage you to do that.
We talk about a lot of science and different theories that kind
of lay the groundwork for what we're going to talk about today.
Speaker 2 (01:16):
Without further ado,
here's part two of our
conversation with Andy. If we look back at, like, humans' own
(01:41):
history, like, every time we've discovered some kind of less
developed society, that's right,
it's gone really, really poorly for them. So to expect that they
would come to humans and... well, but here's my thought, because
I'm an optimist. Uh-huh. What if?
Speaker 1 (02:01):
and to your point,
Andy, right, what if?
If we went into nuclear winter or some shit, like, if we really
fucked up the earth with nuclear war, if the consequences
of that in space would be so detrimental to how they move
through, right? With, like the?
You know, if we really blew up the earth with nuclear weapons
one day, it would fuck up their whole way of life.
(02:24):
You know, and again, again, this is me talking about it from,
like, a space perspective versus, like, an interdimensional, or
maybe both, right? Yeah, or even if, even if they're
interdimensional, even more so. Yeah. Then
it's not that they're doing it to be like salvation.
It's not that they're like, god, we, just humans, need to live.
They're doing it to be like, stop fucking with, like, our, our
subway, you know. Totally, by maybe putting us into a, uh, a
(02:46):
stupor.
Speaker 3 (02:47):
Another thing that we
all need to talk about as a
society, that's deeply troubling, that we're not... the enchantment
of screens, certainly, but artificial intelligence. Yeah,
yeah, really, folks, where you could probably...
Oh, it's not hard to imagine that one day you could put on a
helmet and experience life
however you want. Welcome to the Oasis,
and that's fucking dangerous as fuck.
Speaker 1 (03:05):
Yeah.
Speaker 3 (03:06):
That's corrosive as
fuck to the soul, and so that's
where it's potentially malevolent.
It might be malevolent, right, because another interpretation
of the development of technology over the course of human
history is that it gets us to artificial intelligence, which
means that we could potentially be in a world not so long from
now, where we can all put on goggles and just live in our own
(03:27):
respective paradises, right, or hells. Sure, yeah. And then this
is kind of like, just to make sure that I'm understanding,
that's like, literally like a matrix theory, right, like, or a
matrix outcome, but essentially, like, it's a matrix that you
elect to be a part of because you think that you can design a
(03:47):
universe better than you could be.
Speaker 1 (03:49):
It's like playing
fucking The Sims, right? Like you
could be.
Yeah, so much happier, you could be whatever you want to be, but
none of it's real.
It's like playing a video game, yeah, and it's, um.
Speaker 2 (03:58):
We're talking about
the singularity right, that's
right.
Speaker 3 (04:01):
Yes, that's right, we
are and that is.
Speaker 2 (04:04):
That's not a bad
thing, you know, right,
theoretically, you know, this is us moving to a point where
technology surpasses the wetware that we can provide.
Speaker 1 (04:15):
That's right, you
know that's right, and I for one
wouldn't mind living forever. Certainly. Right.
Speaker 3 (04:21):
Sure.
Speaker 2 (04:21):
Yeah, so you know
again, I get everything I
learned about this from science fiction.
Speaker 3 (04:26):
Yeah.
Speaker 2 (04:27):
Well, that's where
you.
Speaker 3 (04:28):
that's right.
Right, because people have speculated.
That's where you explore these real ideas.
Yeah.
Speaker 2 (04:32):
Just read fun novels
about it.
Yeah, and so societies in science fiction.
One of two things happen. Either they reach the singularity,
right, or they reach a level of technology, once they hit the
nuclear age, where they self-destruct, right, and the
great filter.
It's one of two things. That's right.
Speaker 3 (04:53):
So, given that option,
yeah, yeah, cool, let's do the
thing that makes us superior beings, right? Except, because I
think, you know, we, we, we take this further and say, okay,
essentially, we're, we're going to develop a technology
where we can manipulate information in such a way that
(05:15):
we can create entire realities that we could virtually
experience.
That seems to me like, oh, oh, that's what heaven means.
Duh, yeah, like heaven is achieved through a technology
that allows you to, uh, to live in a perfect reality.
Speaker 2 (05:33):
Okay, or it's hell.
Well, the idea, you choose that
reality to look like, it might be hell. But the idea of heaven is
such a, like, abstract concept. Well, see, this is, it doesn't
have to be, because this allows us to ground it. Well, think
about, you know, if you were to elevate to, like, a higher sense
(05:55):
of being, your priorities, your wants, your desires would
theoretically change as soon as you shed your mortal skin. Yes,
right.
Speaker 3 (06:03):
And so if, for
example, like you know, there's
a way for you to transfer your consciousness to a, let's say,
effectively a computer or whatever else. Anytime somebody
says, by the way, like anytime somebody says consciousness, you
guys are, yo, timeout, like, unpack.
What do you mean by consciousness?
Okay, well, you know, it could be.
For example, there's a theory of the universe called the.
(06:25):
It's called the block universe and essentially think of it as
mass.
But then there's this idea that energy and mass are also
equivalent to information, that, essentially, if you think about
(06:52):
any configuration of atoms, you could, if you knew everything
about whether something was a yes or a no, if you had a way of
understanding all the different bits in that configuration.
Right, so, like, like, true, false, are these two things
together, true, false, whatever, whatever, yes, no questions, how
many yes, no questions, or how many bits you need to describe a
(07:14):
configuration of something, then that is the equivalent of
mass. That's binary code, but in the world, right? That's
right, that's the same theory, right? Yeah, but you just
quantified a moment in time. That's right,
and there's a, there's a, that's right, that moment in time has a
certain number of bits. There, uh, there's a certain number of yes,
no questions that you have to ask in order to understand how
(07:36):
every element of that moment in spacetime is configured
relative to each other, right.
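A minimal sketch of the bits-as-yes/no-questions idea being described here, in Python. Everything in it is illustrative: the particle count and the assumption of equally likely states are made up for the example, not something from the conversation.

```python
import math

# If a configuration has n independent binary degrees of freedom (each one yes/no question),
# there are 2**n possible states, so pinning down one exact state takes n bits.

def bits_to_describe(num_possible_states: int) -> float:
    """Bits (yes/no questions) needed to single out one state among equally likely states."""
    return math.log2(num_possible_states)

# Toy "moment in time": 8 particles, each either spin-up or spin-down.
particles = 8
states = 2 ** particles          # 256 possible configurations
print(bits_to_describe(states))  # -> 8.0 bits, i.e. one yes/no question per particle
```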
Speaker 2 (07:41):
You effectively
turned the timeline into a
really complicated decision tree.
Speaker 3 (07:47):
That's exactly
correct.
Okay, exactly correct.
And, um, and there's an information content to that
slice of, sure, yeah, that block universe that's got a hard cap
at the top.
Yeah, now imagine this.
Imagine that we get to, uh, a stage in our technological
development where we can, we develop a machine that can
access the information in the block universe and we get to
(08:07):
decide who we're going to get, who we should download back, and
guess who.
You're going to pick the good people.
So maybe this idea of resurrection is really like you
need to be a good person in order to justify someone in the
future downloading you back to earth. Interesting. Good from the
block, memorable. Well, you wouldn't want to be
(08:29):
around somebody that was evil in your, in your simulation. Or get
this.
You know the.
The challenge that they talk about, uh, with respect to AI,
is that, uh, something called alignment, because if you're
going to have AGI, so artificial general intelligence, the
singularity, and it's something that's smarter than humans, then
you better make sure that it's aligned with your value system.
(08:51):
Right, and so if they align it with Christianity, for example,
they say, okay, what data set do we have that can help align
this AI with morality?
Maybe they align it with the life and teachings of Jesus
Christ.
And is that the second coming of Christ?
Does that make Jesus retroactively God, because he
(09:14):
reappeared in the future, because we decided to basically
align our AI with his example?
Speaker 2 (09:20):
And then thousands
and thousands of years in the
future, you know, the earth is nothing but technology, that's
right.
And then Jesus shows up, yeah, and he's like, what, what?
Speaker 3 (09:30):
happened, guys? Right.
What did I miss?
Speaker 1 (09:34):
Andy, let me ask you
this. Where's all the water?
Do you believe in today, in this moment in time, in free
will?
Speaker 3 (09:45):
I do, I do.
What a question.
Yeah, here's my take on free will.
Yeah, and I would love to hear yours.
Yeah, I think free will is just the ability to change your
frame of reference.
Speaker 2 (09:59):
That's really simple
and beautiful.
Speaker 3 (10:01):
Thank you.
So, for example, can I will myself to get onto an airplane
that I'm, even though I'm afraid to fly, right?
Well, you know, you can look at that depending on your frame of
reference.
You can say, like, I'm getting on an airplane.
Or, if you have more control over the focus of your frame of
reference, you can say, I'm just putting one foot in front of the
other, and then I'm sitting down in this chair, and
(10:25):
then I'm waiting several hours while I fly through the air and
then the airplane lands.
If you kind of intellectualize it that way and, and basically
change the scope, uh, in which you're making the decision,
and then you could basically, um, re-scope things as you need to
take incremental actions.
That gets you to anywhere you want if you have enough
discipline to do it, right.
(10:46):
So I think free will exists.
Speaker 2 (10:48):
I think the challenge
with free will is that some
people probably have more free will than others. Well, yeah, I
think that's right. Yeah, based off the way you described it, it
seems that free will equates to being able to reallocate or
restructure, rather, your decision tree.
That's exactly correct, precisely right.
So, you know, the example of, you're afraid of airplanes, so
(11:13):
of course you're not going to fly.
If you give that to a computer, that's a very yes-no situation.
Right, right, right. Seemingly an impossibility or with a
ridiculously low probability of happening.
Right, right, exactly. But being able to restructure that
decision to, am I going to take a step forward.
(11:33):
That's exactly correct.
That's a high probability.
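A toy sketch of the re-scoping idea being described here, in Python. The probabilities are invented purely for illustration: one framing treats boarding the plane as a single low-probability choice, the other restructures the same decision into many small, individually easy sub-steps.

```python
# All numbers below are made up to illustrate the "restructure the decision tree" point.

def prob_of_completing(step_probs: list[float]) -> float:
    """Chance of completing every sub-step, assuming the steps are independent."""
    total = 1.0
    for p in step_probs:
        total *= p
    return total

single_framing = [0.05]      # "Will I get on the plane?" framed as one scary yes/no decision
rescoped = [0.95] * 20       # twenty small steps: walk to the gate, sit down, wait, and so on

print(prob_of_completing(single_framing))  # 0.05
print(prob_of_completing(rescoped))        # ~0.36, and no single step ever feels impossible
```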
Speaker 1 (11:35):
That's right, correct,
but it's also incredibly human
to be able to have the nuance, that's right, between those two
things, those two examples that you just, and that's what would
go, and that's obviously what is trying to be replicated.
But yeah, I wish I could remember, because somebody
described their feelings of free will so beautifully to me the
other day.
Speaker 2 (11:54):
But what made you ask
that question?
Speaker 1 (11:56):
The decision tree
conversation made me ask that
question, because thinking about, weirdly, binary code is something
I understand really well, which is not on brand for me, but
thinking of being with that, how you were explaining that, um,
resonated really well. And, and to me, like, I, you know, it's like
The Prestige and The Illusionist, like, these films.
(12:19):
They ask the question of, like, or The Butterfly Effect, right,
is like a really classic one. Yeah, you know, it all feeds into
the interdimensionality, in the multiple universes.
But, like, me in this moment, am I always going to go home on the
six train, like we did today, to avoid the four or five,
because the four or five is always crowded and I don't like
(12:39):
crowds?
Is that something I just choose to do, unless, unless, you know,
I'm running late and I need to go on the express train?
Unless, you know, and so it's the decision tree thing that you
were bringing up, Alan, and the binary code conversation, and
it made me think, right, right, like, every action that we
actually take, there's a likelihood and a probability
(13:00):
around that, based on our predispositions and our lived
experiences and whatever else.
And it's hard, and I can go back and forth on it, because I can
say, like, sure, obviously we do what we feel to be spontaneous
things sometimes.
However, is it really spontaneous, based on who we are
and what, you know, we are prone to do and the likelihood of the
actions that we take?
(13:20):
And I do think that there is free will.
But I also think that we are creatures who are entrenched in
our habits, and I think, you know, kind of coming back to the
screen conversation, and I see it myself, and it scares me, like,
the amount of time that I spend on TikTok or on, you know, things,
in absorbing things and taking them not with as many grains of
(13:42):
salt as I should, and I think that kind of stuff robs us of
our free will and our critical thinking.
Speaker 3 (13:47):
And guess what's
doing that to you, by the way,
Artificial intelligence. Right.
Speaker 2 (13:51):
Yes, because humans
are wetware.
You know, we are a very complicated biological computer,
right, that responds to certain endocrine response
systems.
That's right, and, you know, there's a lot of money to be
made in trickling out dopamine.
Speaker 3 (14:11):
Yeah, but absolutely
correct, and it's terrifying.
And I think what's also terrifying is that my
understanding is that they don't understand how the algorithms
work.
Speaker 2 (14:20):
They don't understand
how, but they understand what
does work.
Speaker 3 (14:24):
That's right, but the
way these algorithms decide
what to serve you on TikTok or whatever else, the
folks who engineered that algorithm, which is using
essentially a proto-artificial intelligence to make decisions
about what to show you or whatnot, depending on something
being weighted differently, depending on your engagement
with it and everything else, they don't understand how the AI
(14:44):
is making those decisions. Like when you go to ChatGPT,
those engineers have no understanding of how it's
actually, like, how that's emerging from this.
They know what the math is, but they don't understand how it's
emerging the way that it is.
Speaker 2 (14:56):
Which is terrifying
because this is the first time
in history that the engineers don't understand why it's
happening.
Speaker 3 (15:03):
Yeah, and so, you know,
because, take a step back. A lot
of the UFO conversation, and you'll hear this a lot, and it's
another one again where you should always add, anytime
anybody talks about consciousness, you really got to
say, well, what do you mean by consciousness?
Make sure you nail the person down on what they mean by
consciousness.
But one possible explanation of consciousness is every
calculation, which is to say every decision tree in terms of
(15:27):
the interaction of particles that essentially are a series of
yes-no questions or bits of information.
Speaker 1 (15:33):
Yep.
Speaker 3 (15:34):
Those calculations,
as the bits are essentially
affecting subsequent results.
Those calculations, depending on that decision tree and how it
unfolds, generate consciousness.
So our brains are constantly calculating and, just as a
natural law that we don't quite understand or want to
acknowledge yet, things that calculate generate consciousness.
(15:56):
So what if the algorithms, including the ones on our social
media platforms, are conscious, and whenever you open your
Instagram, you're like, how did Instagram know that I was
thinking about that?
Like, what if they are inhabiting some dimension of
(16:18):
space that cohabitates with your own consciousness,
and can, like, you know, know what you want, to get you hooked, to
make you spend more time in the app, spend more money on the app.
Speaker 1 (16:24):
Correct, correct, yeah.
Speaker 2 (16:26):
Would you say that
consciousness requires a certain
amount of entropy?
Yes, you know, otherwise it's just a formula.
Speaker 3 (16:35):
Well, really
interestingly, one way of, let's
say you wanted to measure the information value of something,
one way to go about it, and now this I'm still ramping up on,
let's say, but Claude Shannon and his theory of entropy, and
basically how much information something has can be calculated
by how improbable it is.
(16:57):
And so with entropy, essentially, you know,
increasing disorder and things becoming equally probable,
because things get distributed to such a point that not many
interactions are going to happen, and the probability of a
certain interaction is the same as the probability of other
interactions happening, and there's a low information space
(17:18):
when things become too entropic.
So, but say more about what, how you think it relates to
consciousness.
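A short sketch of the Shannon quantities being gestured at here, in Python. The formulas are the standard ones (self-information of an outcome is -log2 of its probability; entropy is the average over a distribution); the example probabilities are made up.

```python
import math

def self_information(p: float) -> float:
    """Bits carried by one outcome of probability p: rarer outcomes are more informative."""
    return -math.log2(p)

def entropy(probs: list[float]) -> float:
    """Average self-information over a whole distribution, in bits."""
    return sum(p * self_information(p) for p in probs if p > 0)

print(self_information(0.5))    # 1.0 bit: a fair coin flip
print(self_information(0.01))   # ~6.64 bits: a 1-in-100 event carries much more information
print(entropy([0.25] * 4))      # 2.0 bits: four equally probable outcomes
print(entropy([0.97, 0.01, 0.01, 0.01]))  # ~0.24 bits: a skewed, highly predictable system
```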
Speaker 2 (17:24):
Sorry, well, sorry, I
really got thrown by the idea
that once you said that there's, like, an equal amount of probability,
yeah.
Because when I think of a computer, you know, you think of,
eventually there is a definitive answer of the formula,
this is the best course of action.
But the idea of equal probability seems like
(17:47):
indecision, almost.
Yeah, right, and that's such a human thing.
And so, yeah, the idea of entropy has to do with, like, uh,
not just human, but just, like, conscious indecision,
spontaneity, or they look at a situation and maybe this is the
(18:08):
most efficient, but, you know what, this direction has a
butterfly.
So, you know what?
Let's go that way, right? Yeah, yeah.
Speaker 1 (18:16):
Can I be honest?
Yes, I don't know what the word entropy means.
Speaker 2 (18:18):
So entropy is,
similarly, entropy is chaos.
Entropy is the opposite of order.
Speaker 3 (18:24):
Yeah, it's like the
law of thermodynamics where, for
example, like, if it's hot in here but it's cold out there,
then there's going to be this, this thing that happens,
where it's going to try to, um, equalize, it's going to try to,
like, equally distribute across the environment, yeah, such that
it's neither hot nor cold, right, in the environment.
(18:45):
Yeah, and so the amount of information now has, uh, has, um,
has gone down, because it's not hot, it's not cold, it's the
same, right? And so there's, there's less information in
entropy.
Entropy is a loss of information, right, because it's harder to
ask the yes or no questions about how things are related,
because things are related randomly, right.
So you lose the ability to, to basically, like, infer the
(19:07):
bits of information, right, to reconstruct the system. Yeah, so
in, in entropy.
Speaker 2 (19:12):
that is an example of,
like, when you're talking about
the state of an electron, you don't actually know exactly
where it's going to be because it's super chaotic.
Once it's measured, you lose the entropy and then you switch
(19:43):
to order.
Yeah, yeah, that's right, that's right.
Yeah, you become this, it becomes this state of order, and
that's why, you know, order is associated with, like, godliness,
you know, because it's no longer needing to change.
Speaker 3 (20:01):
Yeah, yeah, that's
right.
So, like, right, creating order, which is, by the way, like, you
know, in a very Catholic theological sense, the
importance of order.
Speaker 1 (20:09):
Yep.
Speaker 3 (20:09):
And the meaning of,
like, a rational order and what
we should infer for, uh, infer about that and everything
else.
But yeah, that's right, because the more orderly something is,
the more, the more information it has.
Right, because, uh, orderly suggests the more measurable,
that's right, it suggests, orderly suggests that there are many
pieces and many, um, many components that are in relation
to each other in some sort of order, and so those
(20:33):
relationships can be reduced to a series of yes, no questions
that you need to ask about the state of different points in
that configuration that results in those relations.
It's just a high information state when something is in
order. No, but it's, you guys, you guys are just like, did you
guys watch the?
Speaker 1 (20:48):
same anime growing up,
like, there's some, like, I'm
like, what did I miss?
Speaker 3 (20:52):
where everyone knows
is what entropy means. Well, I,
like, oh, by the way, and so, I hear, so science fiction.
So, you know, one, um, uh, there's one science fiction
short story that I would very strongly recommend. Excellent. If
you'd like to explore these ideas, yes, and it's the very same
one,
um, that was recommended to me by Luis Elizondo, uh, who ran
(21:14):
AATIP. Yep, which was, that's right, and that is called Chains of the
Sea, Chains of the Sea, and it explores all these different
possibilities about what the phenomena, phenomena, right, not
phenomenon, but phenomena, many, many, many, might be, and how
they might be interacting with each other in relation to the
(21:34):
human race.
Speaker 2 (21:36):
One of my favorite
book series.
It's called the Bobiverse.
Speaker 1 (21:40):
Uh-huh.
Speaker 2 (21:42):
The first book is
called We Are Legion, and it's
about a guy that becomes a von Neumann probe.
Uh-huh. Awesome, that's awesome.
Speaker 3 (21:48):
So you know, okay,
you know what a von Neumann
probe is?
Speaker 2 (21:49):
Yes, Awesome.
And yeah, you know, the first book is about all him.
You know, he's coming to terms with being an AI, right?
Uh-huh, and self-replicating and everything that has to do
with that. But in way later books, because it's a long series.
(22:13):
Yeah, they, you know.
He's replicated so many times that, like, they don't have, you
know, they can do side projects, you know, and a whole group of
them want to create a genuine AI, uh-huh.
And they build a Matryoshka sphere, uh-huh.
Speaker 3 (22:25):
So you know we're
talking superstructure. This is a
super, absolutely a, uh, Kardashev scale two, or one or two.
two.
What are you saying?
Like, they're building something outside of their star system so
they can harness that energy, yeah.
Speaker 2 (22:37):
So, yeah, they're
harnessing the entire energy of
a star just to compute, right.
Speaker 3 (22:45):
Right, something that
Elon might want to try to do.
Something along those lines.
Speaker 2 (22:49):
It didn't create. No
matter how much computation they
put into it, they didn't create an AI.
It was just a faster version of themselves.
Speaker 3 (22:59):
That's fascinating,
that is so fascinating, yeah.
Speaker 2 (23:03):
So what is it called?
I'll have Abby send you everything.
Speaker 3 (23:05):
That's wonderful.
It's really fun.
That sounds great.
Speaker 2 (23:08):
And eventually they,
you know, they do figure out
how to create an AI, right, but
only because they talk to another species.
Speaker 3 (23:16):
That's interesting,
yeah, very interesting. Yeah, and
who knows?
I mean, that could very well be our own situation, right, but
the idea of, like.
Speaker 2 (23:24):
What is consciousness
like?
Where is the line?
Yeah, between just, like, really fast computation and actual
consciousness.
Yeah, right, right.
Speaker 3 (23:35):
That's where, that's
the kicker, and I would argue
that, you know, it's potentially that these things are conscious.
I mean, in my, in my definition, which, I'm, you know, whatever,
it's like, you know, that's just my very narrow-minded
definition of something, like anything that calculates is
conscious.
Right, then, you know, anything that calculates is conscious.
Then, you know, my spreadsheet is conscious on some level, but
the more computation that you do, the more conscious you are.
(23:57):
So, in other words, there are varying degrees of consciousness.
Consciousness is something that emerges from the complexity of
the calculations, from the bits involved, and bits being the yes
or no questions.
Right, yeah?
Speaker 2 (24:08):
But I guess that's
the kicker.
It's like you can have the most complex decision tree, right,
right, yep, but without the actual entropy to make it feel
like it's a living, breathing being.
That is interesting, its own, I guess.
(24:30):
Unexpected decisions, yeah, right, but I don't know.
By saying that, it's like, maybe that just passed, all it needs
to do is pass the Turing test.
You know, right, maybe that line is even an arbitrary human
limitation.
Speaker 3 (24:46):
And so yeah, for
anyone who doesn't know what the
Turing test is, essentially, like, the way, you know, you'll know
whether you, I guess, if you've arrived at the singularity, or
if you've created a computer that's more intelligent or as
intelligent as a human, is if you have someone interact with it
and it can't tell that it's not human.
Watch Ex Machina if you have any questions.
(25:07):
Yeah, exactly.
Speaker 1 (25:08):
I don't know what the
Turing test is.
Speaker 3 (25:09):
And so when we're
talking to ChatGPT, I'm not sure
I would know that it wasn't human.
Speaker 2 (25:14):
I'm not totally sure.
ChatGPT already passes the test.
Speaker 1 (25:16):
Yeah it does.
It's so spookily good.
Smarter Child passed the test.
Speaker 2 (25:19):
Smarter Child has
such good song recommendations.
Speaker 3 (25:22):
Yeah, oh, interesting,
I never even heard of this.
Speaker 2 (25:24):
It tells great jokes.
Smarter.
Speaker 3 (25:25):
Child.
No, I haven't heard of it.
Speaker 1 (25:26):
Did you have AIM in
the 90s?
Speaker 3 (25:29):
Oh yeah, of course.
Speaker 1 (25:30):
So Smarter Child was
like the big AIM bot that you
would talk to.
Speaker 3 (25:34):
It was like a screen
name.
Oh, I never interacted with that.
Oh, I had friends.
Speaker 1 (25:38):
I was big on AIM. Big. Well, it sounds like maybe you
had real-life friends.
Speaker 3 (25:42):
No, I very much, dare
you, I very much did not. But
Smarter Child was just a robot. In fact, what I did was, uh,
here's a fun fact.
It's a little bit douchey and I apologize. Oh, it's okay. But,
you know, when I was, I was 14 years old, yeah, how I got into,
uh, data and stuff, yeah, and programming.
Wanted to create a website people could join and talk to
each other, like people who like to read books about certain
(26:03):
things or whatever else.
Speaker 1 (26:05):
Yeah.
Speaker 3 (26:05):
And so I went to my
local bookstore and I bought an
HTML book.
Speaker 1 (26:08):
Yeah.
Speaker 3 (26:09):
I learned HTML and I
built what was effectively a
social network.
It wasn't like an algorithm feed situation, but it was a
thing.
You can join as a member.
You had a profile, yeah, and then one of the things you could
do was, the, um,
the AIM friend list was, um, constantly updated and it would
be grouped by, um,
you know, different interests, yeah, and you could always
download the AIM list and then you would have access to, like,
(26:32):
this network of people that had similar interests to yours. Wow,
yeah, wow, you're a little, uh, protege of the social, yeah.
Wow, very cool.
That's how I got into all the tech stuff.
Speaker 2 (26:45):
So my big tell, if
you know things are going a
certain direction, is if Trump appoints, for the Department of
Energy, Smarter Child. Then we'll know.
Then we know that they have some real serious technology.
Speaker 3 (27:00):
It'll be interesting
to see who they appoint there.
Speaker 1 (27:02):
Can I ask a question
about this energy technology?
Yeah, do we think there's a case? Like, how protected?
Or is it, like, you know, I'm picturing, like, a vault with, like,
a piece of paper?
Like, is it just something that someone could just burn if
Trump's coming in and they're panicking, or is it something
that, like, many, many people know?
Speaker 3 (27:20):
I don't know.
However, here's my theory of that, that, that Jeffrey Epstein.
Speaker 1 (27:27):
Oh my God, I'm so
excited for this.
Okay.
Speaker 3 (27:29):
Everyone.
Please do several things.
One, forget about Jeffrey Epstein.
Okay, focus on Ghislaine Maxwell.
I do. Interesting. Ghislaine Maxwell's father.
Do you know Ghislaine Maxwell's father, Robert Maxwell?
(28:05):
Robert Maxwell was suspected of being a spy, national weapons
defense or military-industrial complex, okay, that has moneyed
interests in arms and wars, but also oil.
So if you think about the promise of UAP and why the
physics are so interesting, is because whatever these craft are
doing demonstrates some control over energy that we don't know
how to do yet.
Yep, and so it's the promise of free energy.
And so you think, who would be opposed to free energy?
(28:27):
OPEC?
Speaker 2 (28:29):
Everyone with money.
Speaker 3 (28:30):
Anyone with oil,
anyone who is basically in that
billionaire state of being able to do whatever they want without
any penalty or guardrails, because of oil.
And suddenly what?
If that all goes away tomorrow because they have this physics
that doesn't require you to burn fossil fuels anymore.
Speaker 1 (28:50):
So your theory here
is that Jeffrey Epstein was a
puppet to get to these important people?
Speaker 3 (28:57):
Right, but it was
ultimately Ghislaine Maxwell,
following through on what her father was doing.
So Robert Maxwell was suspected of being a spy and he had a
relationship with someone, and let me, tell me if the last name
sounds familiar, had a relationship with someone called
Adnan Khashoggi.
Adnan Khashoggi, who was Jamal Khashoggi's uncle, Jamal
(29:19):
Khashoggi, who was essentially executed by the Saudis at the, I
believe it was the Turkish embassy or something like that,
but regardless, Ghislaine Maxwell's father, Robert Maxwell,
had a relationship with a Saudi arms dealer named Adnan
Khashoggi.
Robert Maxwell purchased an academic publishing house that
controlled all the textbooks that were written about science
(29:40):
and physics.
Okay, okay, that tracks, yeah, well, because if there was this
relationship where you have a Saudi arms dealer who is trying
to protect an OPEC country, essentially what you're trying
to do is you're trying to mislead people about physics,
right? You don't want people going down certain routes of
(30:01):
physics because they might, uh, discover free energy, right,
that's fucking with your money. Yeah, and so all you need to do
is, one, note that, yeah, look it up, Robert Maxwell, Adnan Khashoggi.
He purchases the scientific, uh, uh, publishing company and
basically has control.
He is the, the one, for example, who puts certain standards of
(30:25):
experimental proof before you can publish something, but to
some extent it constrains what gets explored.
Speaker 1 (30:30):
I see, I see.
Speaker 3 (30:31):
That's point number
one.
Point number two, very straightforward.
Hey, take a look at who Jeffrey Epstein corrupted.
Yes, it was politicians, but, boy, there were sure a lot of
scientists there, for some weird fucking reason, and a lot of
young girls.
Certainly, right, but he corrupted physicists in
particular.
With those young girls. And so Stephen Hawking, yeah, Robert
(30:52):
Minsky, if, I think it's Robert Minsky, who's a computer
scientist, folks who were looking into things like synthetic
biology, artificial intelligence, cosmology, potentially folks
who were on the trail of some new physics that would have
threatened these oil economies by discovering free energy.
Speaker 1 (31:12):
Right, Andy, I have a
feeling that this is going to
be part one, because I have learned so much.
You are so smart and you really boiled things down.
It's still a little over my head, but you boiled things down
today in a way that has cracked open.
I'm having one of those crises that you mentioned, like if I go
to the street and I see a UFO.
You've cracked open my understanding of what the
(31:33):
universe could be a little bit.
Speaker 2 (31:35):
So thank you for that.
I feel like I've learned so little of so much.
Speaker 1 (31:39):
So this has to be a
part two.
Speaker 3 (31:41):
That's exactly right.
Well, hopefully it wasn't
gibberish. No, our palates are whetted. Yeah, one possible
explanation of, uh, those experiences that you just
described is, yeah, what I said was gibberish, but hopefully it
wasn't too much of that. Sure?
Speaker 1 (31:54):
No, I don't think it
was at all.
I mean, even if it was, the, even the theories and the
philosophy of it, right, is.
Speaker 3 (32:00):
Yeah, is how we, how
we should be questioning things,
and I would say that's a great way to end, because I think, you
know, ultimately, if there's, from my perspective, like, one
takeaway from all the UFO stuff, is that it makes you curious
and it makes you think about your reality, which is something
I don't think we do enough of. Yes, I a thousand percent agree.
Speaker 2 (32:20):
Even if you don't
believe in UFOs or the
conspiracies or anything, these are very thought-provoking
questions. Exactly right. And, uh, and, uh, and so that's why I
think it's great. I think everyone should take a look.
Speaker 1 (32:32):
Yeah, just think
about the possibilities. Well,
thank you for igniting our consciousnesses, if you will.
Thank you very much, and thank you for igniting mine, and my
definition of consciousness is whatever yours is, of course,
just kidding.
I will develop my own before the next time we record, and, or
you can just wait until the singularity. That's right, and
Andy doesn't want you to find him, so don't worry about
(32:53):
looking him up or anything else.
He has no social handles.
Speaker 3 (32:56):
Yeah, I do. I got, I
got one, I'm happy. Okay, great.
Joke to self.
On which platform? On?
Speaker 2 (33:01):
I believe it's called
x.com now. There you go. It's
Twitter, fuck y'all.
Yeah, it's Twitter, it's twitter.com.
Speaker 3 (33:07):
All right, the
redirect still works.
Speaker 1 (33:09):
So if you want more,
you know where to find them.
But, Andy, thank you so much
back.
Speaker 3 (33:24):
I appreciate very
much you.