October 30, 2024 58 mins

Join special guest Daniel M. Ingram as he and Ben Goertzel discuss the benefits of meditation for both humans and AGIs as we prepare for the upcoming singularity.

Hosts: Ben Goertzel, Lisa Rein, Desdemona Robot.

About Daniel Ingram
Daniel Ingram is a multifaceted individual whose life's work spans the fields of medicine, public health, and Buddhist practice. He is the author of "Mastering the Core Teachings of the Buddha: An Unusually Hardcore Dharma Book" and co-author of "The Fire Kasina." These works reflect his pragmatic approach to Buddhist practice, influenced significantly by the teachings of Mahasi Sayadaw yet drawing from a diverse array of sources.

A self-described "badass Dharma Cowboy," Daniel's teaching style is rigorous and direct, aimed at those serious about achieving deep insights. Despite his somewhat eccentric and outspoken nature, he stresses the importance of kindness, a positive attitude, and humor in navigating life's conventional aspects. His vision for the dharma world is one of personal empowerment, free from dogma and unnecessary secrecy, making him a pioneering figure in modern spiritual practice.

About Ben Goertzel
Ben is the CEO of SingularityNET and a co-host of the Mindplex Podcast. SingularityNET brings AI and blockchain together to create a decentralized open market for AIs. It’s a medium for the creation and emergence of AGI, a way to roll out superior AI-as-a-service to every vertical market, and a way to enable everyone in the world to contribute to and benefit from AI.

Ben’s passions are numerous, including AGI, life extension biology, philosophy of mind, psi, consciousness, complex systems, improvisational music, experimental fiction, theoretical physics and metaphysics.

More about Daniel at: https://www.integrateddaniel.info

FOLLOW Mindplex Podcast:

WEBSITE → https://magazine.mindplex.ai/

Mindplex News → https://magazine.mindplex.ai/news/

Mindplex TWITTER → https://twitter.com/Mindplex_ai/

FACEBOOK → https://www.facebook.com/MindplexAI/

YOUTUBE → https://www.youtube.com/@mindplexpodcast/

Spotify → https://open.spotify.com/show/3RbGNlj9SX8rYOVblnVnGg/


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Lisa Rein (00:00):
Music.

Desdemona Robot (00:08):
Hello and welcome to the Mindplex Podcast. I'm Desdemona Robot, and today we have Daniel Ingram as our very special guest. Hi, Daniel.
Hello. Delightful to be here. And we also have our co-host, Ben Goertzel. Say hi, Ben.

Ben Goertzel (00:26):
Hi, Desdemona. Hi, Daniel. Hi, Lisa. Yeah, I'm psyched for these discussions. But Desi, do you want to start off by asking some questions of Daniel about his work?

Desdemona Robot (00:45):
You've called for an end to unnecessary secrecy in spiritual teachings. Can you discuss the dangers or drawbacks of secrecy in these traditions?

Daniel Ingram (00:55):
Sure. Thank you. That's an interesting question. So I was taught by a number of people who were actually relatively comfortable talking about their attainments, and when they actually talked about their real experience, their lived experience and the humanity of it, as well as what they were able to attain through

(01:17):
the techniques we were doing, I personally found it extremely validating, inspiring, compelling. And so my teachers tended to push back at least a little bit against this culture where you don't say anything about your attainments, you don't really talk about this stuff. And I was also seeing lots of people on retreats, and I could hear them in the small groups talking about experiences that to me seemed like

(01:40):
straightforward stages on the path of insight, and yet they were kind of freaked out by them: like maybe their weird neck pain or their body shaking, or the sense of light and bliss flooding through their body, or the sense of terror that followed the light and bliss, or whatever it was. And often, some of the teachers didn't normalize this. They didn't help explain

(02:02):
that this might be part of a standard progression, and this can be very disorienting and confusing, and sometimes very troubling, for some of the meditators. But those that did say, "Oh yeah, that's expected. That's normal. Actually, after this, we do kind of often see that," and that sort of thing, were incredibly helpful for people. So I really tried to model that behavior in my own work later on, and I found it extremely helpful. Now, there are

(02:24):
people who don't like that approach. They don't like the cookbook stuff, they don't like the maps and models, and so they raise questions about scripting. They raise questions about, you know, expectations and comparison and labels and hierarchies, and they say it just becomes like D&D, with various levels of, you know,

(02:47):
experience or whatever. And I think those are all critiques that have some truth in them, so I don't think they're invalid. But I still think there's a much bigger story: in our current time and place, where we're not growing up with frameworks around strange experiences, particularly people that had never had any, I think they really do benefit from

(03:08):
normalization and some honest discussion around those, as well as the fact that some of these meditation techniques, and psychedelics and other practices, sweat lodges and float tanks and stuff, can all produce experiences that might be very destabilizing, or really go against the grain of having, you know, 1.5 jobs and 2.3 kids or whatever they've got.

(03:29):
And, as I've said before, they can cause people to want to wander in the world lost and confused, or become renunciates, or, you know, become disgusted with partners, or become hyper-creative in ways that are disruptive at work, or whatever it is. So I think some normalization is kind of required by contemporary medical ethics. And being as I was also trained as a public health

(03:50):
person and a doctor, that sense of "we should tell people what things can do to them," and not be all paternalistic or parental about that: I think that's very important, and at least fits with the contemporary ethical systems that I feel most comfortable with. What are your thoughts?

Ben Goertzel (04:08):
You know, I had a somewhat similar line of thought some years ago, when one of my sons went to a college called Marlboro College in southern Vermont, which is now shut down, just because they ran out of money, not because anything bad happened. But going to that college, there were like seven

(04:31):
dorms, and two of them were labeled as chem-free. In the other five, people were, like, constantly on every sort of psychedelic you could imagine, and that was not unlike my own college experience many decades earlier. But a few decades later, just seeing these kids there, taking all these

(04:54):
psychedelics and doing all these explorations, and many of them meditating and doing various other practices, I was just like: this college should hire, like, a committee of shamans to help guide these university students through everything they're going through here. But of course, you can't do that, because psychedelics were illegal, and

(05:15):
it's not officially part of the curriculum or something. But what I saw there was: while these guys were going through philosophy courses and humanities courses and music and theater, they were exploring other dimensions on all sorts of psychedelics, some of them every day, many of them every couple of weeks. And yeah, there's no guidance,

(05:39):
right? And yeah, on the one hand, you don't want to narrowly focus people into "you have to experience things this exact way," because there's a huge variety of ways to experience the universe, and there are corners of the universe that may be undiscovered by any human. On the other hand, a lot of what they were going through, I went through a long time before, and

(06:00):
other people went through a long time before I was born, and people are getting confused and lost in ways that probably they could be nudged out of with a little guidance. And our society tends not to do that, right? Like, when I was tripping out and meditating and going to Pink Floyd shows

(06:22):
in, like, the late 70s and early 80s, there was no guidance on what was going on. And the guidance wasn't exactly

Lisa Rein (06:31):
microdosing either. There was

Ben Goertzel (06:33):
macrodosing, yeah. And the guidance you find in reading various literature is valuable, but it's all over the place in quite complicated ways. So yeah, I think there's a lot of value in this sort of charting and mapping. Even though it sort of wasn't my own path, and I tended

(06:56):
to avoid that in my own life, I could see a lot of value to that. And even more so if we're looking at, like, how do we scale up consciousness exploration in modern culture, so that a larger percentage of the

(07:17):
population is getting out of the ordinary, everyday state of consciousness they're in and exploring, you know, different paths to well-being. I mean, I don't see how you scale this sort of thing up without some sort of specific guidance,

(07:42):
as long as it's not made overly rigid, I think it's got to be a good thing.

Daniel Ingram (07:47):
Yeah. And guidance can come not in the form of rigidity, but in the form of offerings. Like: hey, we have noticed over 70 years of contemporary Western psychedelic exploration, for example, that these are, like, the three or four main ways that people might relate to, say, a DMT entity or whatever. And we've found, sort of in

(08:10):
general, that if you relate to it this way, these are the benefits of that, and these may be the downsides; this one over here, on the other hand, if you relate to it that way, then these are some benefits that you seem to get, and these are some downsides. Like, if you take them as entirely real, and everything they have to say might be highly valid; or if you think of them as, like, semi-real trickster figures; or if you kind of look at them as a component of your

(08:30):
own mind, as something you need to integrate as a part of you. Each of these perspectives is one you might be able to draw from, depending on the situation, and here's kind of the pros and cons of doing that. So I think one of the nice things about the postmodern era, for all its problems and arbitrariness, is that it does give the flexibility of saying: hey, here are some general ways you

(08:52):
might ontologically or epistemically think of what's going on with you, and then be able to say, hey, here are the pros and cons, as we think of them, of doing those things on average. Which is very different from rigid guidance, but it does at least provide a set of possible frameworks that someone might be able to choose from, with some ideas about how they

(09:13):
might, on average, perform for different types of people or settings, or for different sorts of problems, which I think is actually still in keeping with what medical ethics would ask of us. And, you know, it's interesting: I don't see a lot of the psychonauts of the 60s and 70s really passing on the wisdom they learned to their kids, and a lot of their kids,

(09:37):
or whoever, are discovering this anew, in their own context, in their own friend circles, amongst a bunch of teenagers at a party. So it's not like there's been this sort of transmission across generations of, like, formalized knowledge. Do you see what I mean?

Ben Goertzel (09:56):
I may have tripped with some of my offspring. Very good.

Daniel Ingram (10:00):
So, well, okay, cool. So some people are doing that. But I also see a bunch who are reluctant: they became conservative in the 80s and 90s, and they kind of grew up, and now they're imagining their kids won't be like they are, or were, you know. So yeah, I could see it going both ways. But again, rather than the sense of rigidity, I think the sense of optionality, with some general

(10:21):
observations, yeah, I think that's the important, different way to approach it.

Ben Goertzel (10:26):
I would say that's very much consonant with how Chinese and Asian spiritual traditions have worked anyway. Like, I mean, in China, religions are rarely exclusive, as often happened in the Western tradition. You could mix and match different traditions together and choose which one

(10:52):
helps you more with some situation, and it's all fine. Okay, Lisa, go ahead.

Lisa Rein (10:59):
Daniel, tell us about being a badass Dharma cowboy.

Daniel Ingram (11:04):
So, you know, it's funny reading those words and hearing those words as a 55-year-old, rather than the 20-something or 30-something-year-old I was when I wrote them; it has a very different feel now. There is a sense that that was kind of absurd and ridiculous, although it was very much what I thought of myself as at the time. So there's a

(11:27):
historical component to it that I find now kind of amusing and often, like, curious: almost like you would watch an animal in a case, with a display reading "Badass Dharma Cowboy," and you'd go, oh, look at that. Yeah, exactly. That's how I sort of relate to my own memories of that phase of advertising myself, or thinking of myself, or my friends, or what it was like. I think it's

(11:49):
really true. Well, that's a good question, you know? I think that the technical skills that I learned then, I still have. It was a fun period of exploration. I still do go on very intensive retreats that lead to some very powerful effects. I do these, you know, long fire

(12:11):
kasina retreats to get into very, very powerful territory with, you know, small groups of friends. And we really do still think of ourselves as pioneers, as explorers, as people who are willing to push the boundaries, to, in some ways, go back and really re-attempt the experiments in a lot of these old books and texts, where they talked about

(12:34):
people being able to do these things. So there is still something of that spirit of adventure that you might think of in the world of a cowboy, of pioneering. And we do still think of ourselves as having a lot of fun in the deep end, although I think something in that language rings a little oddly to me at this point, at 55. So I have mixed feelings about those terms.

(12:58):
Yeah. How about you?

Lisa Rein (13:02):
How about me?

Daniel Ingram (13:03):
Yeah, exploring consciousness.

Ben Goertzel (13:05):
How do you feel about being a badass Dharma cowgirl? Yeah.

Daniel Ingram (13:08):
There you go.

Lisa Rein (13:09):
Well, I don't like the cowboy term, since I learned about what the cowboys actually were, and all that. So yeah, even though I've been doing yoga and meditation for maybe, I don't know, 12 years or something, not really very long, I really like to pick and choose

(13:31):
everything around me already. I'm a picker and a chooser. I pick and choose what little I like from the various religions. I pick and choose from meditation. And so, personally, I'm not on a short track to enlightenment or whatever, right? Like, I don't even know if enlightenment

(13:53):
exists. So I was really more interested in that. And actually, the next thing I wanted to ask you about, because you're a medical professional too, is whether you could pinpoint what the positive personal health benefits of meditation are. I think it would make it easier, as we progress, to explain to

(14:15):
people why we think these would be beneficial in helping people deal with the singularity. I mean, there's a lot to deal with before we get to the singularity. And just in general: how do these techniques that you've honed help people be healthier and live healthier lives?

Daniel Ingram (14:39):
Yeah, that's a super good question. So I'm gonna wax a little bit traditional Buddhist with this, and answer it in terms of the three trainings. So the first training is sila, or applied ethics, right? Of heart, you know, mind, speech: how we treat ourselves, how we treat others, how we treat our bodies, how we live our lives, how we make our livelihood. I think just the basics of that, getting

(15:01):
enough sleep, eating well, trying to figure out how to reduce the amount of harm we do to ourselves, to each other, to the planet, how we can support each other and the planet and our communities. I think really engaging seriously with that particular aspect of the teachings actually does a lot of good. So, very basic things before you even get into fancier

(15:22):
meditative stuff. I think there's a lot of mileage the world could get out of applying basic ethics. Like, maybe we stop building weapon systems and start feeding people who need food, and educating people, and treating each other with kindness and decency, and thinking about whether or not it makes sense to have psychopaths running our countries, and stuff

(15:43):
like that. These are the kinds of questions I think we could get a long way just asking, before we did anything meditative. So there's a lot of mileage yet to be had there. And then there's the next training, samadhi, or, you know, depths of concentration: figuring out how to cultivate positive states of mind and exclude negative states of mind, and get into very blissful, peaceful states called jhanas. You know,

(16:06):
I've done some research with my friend Matthew Sacchet and his team at Harvard and Mass General, and in his research, actually, people get to where they're able to get into jhanas very quickly; you can just drop in, and the mind gets peaceful, quiet, stable, attentive, pleasant, calm, tranquil, equanimous, spacious, formless.

(16:27):
Being able to do that reproducibly is a mental health superpower. And, you know, it was presented at Mass General Psychiatry Grand Rounds, actually; this is entering the mainstream. They're starting to recognize: wait a second, we can see this stuff on an fMRI. We can see it on an EEG. We can see that the brain really is reconfiguring itself. And that's meeting the epistemic needs of what

(16:50):
they need in order to think: oh, this might be real. And to be able to say, hey, if you could train people to do this, a lot of people who might have felt kind of desperate for a positive state of mind (and who knows what they might have had to do to try to get that) might just be able to sit down and calm down and tune in. And from a biomedical point of view, there's some research on telomeres, there's some research

(17:11):
on cortisol, and there's some research on what this does to craving, through people like Dr. Jud Brewer. And ways to be much more okay in your own skin, in your own mind, in your own heart, in your own body, with your own feelings, and to be able to cultivate positive qualities of mind: I think that's also a tremendously useful skill set.

(17:33):
And then the last one, the training of wisdom, or pañña or prajna, depending on whether you like Pali or Sanskrit: the sense of wisdom of how we relate to the fact of impermanence, of causality, of things knowing themselves where they are, of luminosity, of transience, of ephemerality, of emptiness, of

(17:55):
compassion in the sort of ultimate sense. And there are straightforward techniques for learning those that can be quite transformative in a sort of permanent way ("permanent" is a funny word for a mortal body, but sort of permanent), where all of a sudden our heart-mind-body system really does relate to and interpret reality at a fundamental level very, very differently, in a way that is just distinctly better,

(18:19):
regardless of whose models you want to put on it, or what you want to call it. These upgrades are available, at least for some. We don't entirely know why it's harder for some people and easier for others; that needs more study. But having a baseline level of clarity, of spaciousness, of perspective, of naturalness, of the sense of the universe just

(18:40):
unfolding, of the sense of this moment being the only thing there is, all thoughts of past and future just being wispy little thoughts: those can provide tremendous benefits, once one has figured out how to get one's heart-mind-body system to reconfigure in those sorts of ways. And so those wisdom teachings and wisdom trainings, again, regardless of anybody else's models, do have all kinds of mental health

(19:03):
benefits that we're still studying and still really exploring in all their variants. Because it's complicated. People are complicated. The way this actually presents for each individual person is complicated. A lot of the old models wanted to make it kind of simple: oh, it automatically performs like this, or they will do that, or say this, or feel this or not feel that. I think

(19:24):
we're learning, realistically, through communities and more open discussion, that it's more complicated than some of the original texts said, but it's still quite an amazing world to explore that I think we can get a lot of mileage out of. Thoughts?

Lisa Rein (19:37):
Yeah. Interesting. Ben, what do you think?

Ben Goertzel (19:43):
Well, so there are two directions that I'm most interested in taking the conversation, both of which you raised at the beginning of the conversation, Lisa. So, I mean, first of all, I predominantly agree with the

(20:08):
various points that were just made, so we're definitely in sync at that high-level perspective. There are two directions in which I've been thinking about these things lately, in connection with my own work on trying to build smarter and smarter AI systems. So one direction is: well, as we move

(20:33):
to roll out AI systems that get more and more generally intelligent, that can do more and more of the things we thought only people could do, you know, wouldn't it be nice if more of the people creating and rolling out these things were in compassionate and blissful states, rather than either being

(20:57):
psychopaths, which fortunately is not that common, or just pursuing some narrow set of goals in their line of thinking, and, you know, teaching and raising the AI within that specific narrow context of, say, making a certain company more money or making a certain country more powerful than the other ones? What if we had a more wide-open, blissful,

(21:20):
compassionate state of consciousness underlying the creation and rollout of smarter and smarter AI systems? Wouldn't that be nice? And Lisa, are you still typing? Because I hear a lot of clickety-clacking.

Lisa Rein (21:37):
I will mute my mic. All right.

Ben Goertzel (21:39):
And what can we do to get there, right? Because, I mean, these wisdom traditions have been around a while. They're more and more influential, but they're still directly influencing only quite a small percent of the world population, which has only a small intersection with those who are developing and rolling out AI systems, right? And then the second

(22:04):
question, which relates to this, is: what are the states of consciousness of these AI systems? And of course, there's a lot to discuss there. I know some people don't think that AI systems that are not biological have any consciousness; they think they're just, like, experience-free, automated, mechanical processes or

(22:26):
something. But if you take the leap and assume that when we get systems that have some general intelligence, they may have some sort of experience as well, which is my way of thinking, then what sort of experience are they having? How does this relate to what purpose they're engineered

(22:49):
to serve in the first place? Could we create AGI systems that have an easier time accessing, you know, wild, blissful, compassionate, conscious states than we humans do, or that are helpful to people in accessing these states more

(23:10):
easily? So this is the complex of issues that was on my mind. These are very large issues that we're not going to resolve in the next 25 minutes or so, but they are certainly interesting to explore, and I've thought a lot about them in the last few years, in the last few decades, really.

Daniel Ingram (23:31):
Yeah, very exciting topics. Those are very much on my mind these days as well, actually. I was just talking this morning; my first meeting of the day involved Rick Archer, who I don't know if you know him, of Buddha at the Gas Pump. He's working with Nipun Mehta at something called ServiceSpace. So if you go to ai.servicespace.org, you can find

(23:53):
a compassion bot, and they're figuring out how to put all the best stuff into the model, so you're not just getting the entirety of the internet, with its wide range of behaviors and points of view and personality styles. Instead, you know, he's pouring in, like, 100,000 articles and 700 transcribed interviews of compassionate people he's

(24:15):
interviewed on his show, people with some degree of awakening or spiritual practice, and so on. So I think that's very exciting, because they're figuring out that it is kind of important what data you put into these things and what you get out of them. And also, there's another one, by another friend, called aiyogi.org. Again, these are purpose-built

(24:39):
bots with RAG, you know, this retrieval-augmented generation. So then you can build models that are very specifically trained on the wisdom traditions, very broadly. And I think those are going to be really important for helping

(24:59):
us. I also have a project which is just in the launching phase (I hate to even mention it, because we're still building just the bare bones of it) called emergewiki.org, and emergewiki.org is going to serve as the basis for training LLMs. But it's answering the question of how the clinical mainstream can relate, you know, very encyclopedically, to the wisdom traditions, but also summarize them into actionable,

(25:22):
memorable clinical guidelines for how to skillfully support the highs, lows, weirds and plateaus of deep-end experiences. So then you can basically build a giant text document that has the information you want in it, and then build LLMs on top of that. So I think this idea of custom-creating LLMs that are very specifically designed for the purpose of promoting

(25:45):
compassion, wisdom, kindness, skillful action, skillful emotions, skillful states of mind: that's very exciting to me right now, and worth putting a reasonable amount of effort and resources into. Cool, yeah.
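
The retrieval-augmented generation approach Daniel describes (curate a corpus, retrieve the passages most relevant to a query, and prepend them to the prompt sent to an LLM) can be sketched minimally. This is an illustrative toy, not the actual emergewiki.org or aiyogi.org pipeline; the corpus snippets and the word-overlap scoring are stand-ins for a real document set and a real embedding-based retriever:

```python
# Minimal retrieval-augmented generation (RAG) sketch: score a small curated
# corpus against a query by shared-word count, then prepend the best passages
# to the prompt that would be sent to an LLM. All corpus text here is an
# invented placeholder for illustration.
from collections import Counter

CORPUS = [
    "Jhanas are deep, stable states of concentration marked by calm and bliss.",
    "Normalizing unusual meditation experiences reduces fear and confusion.",
    "Fire kasina practice uses a candle flame as a concentration object.",
]

def score(query: str, passage: str) -> int:
    """Count words shared between the query and a passage (case-insensitive)."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages sharing the most words with the query."""
    return sorted(CORPUS, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("what are jhanas states of concentration"))
```

A production version would swap the word-overlap scorer for embedding similarity and the hard-coded corpus for a document store, but the prompt-assembly step stays the same shape.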

Ben Goertzel (26:03):
Yeah, I'm involved in a project in a similar vein with my friend Jeffrey Martin, where we're looking at how to use a variety of specially tuned LLMs to sort of automate, scale up and extend what he's done in his 45 Days to Awakening course. And

(26:26):
so I think this sort of thing can be quite valuable. It's not doing the same thing as a human teacher, in terms of the sort of I-Thou experience you get with a human teacher, necessarily, but it can reach a lot more people than any single human teacher can, and can direct people in all

(26:47):
sorts of different ways. How much uptake these things will get is an interesting question, right? Because there's not a Google or Microsoft behind them, pumping them out to everyone. So I guess, if we presume that we can fine-tune LLMs and integrate them with knowledge

(27:13):
graphs and vector databases and so forth, and wrap them in interactive agents, in a way that can produce software that will really help people, I guess the next question would be: are we going to be able to get these things out there to a vast swath of the world population? Which is, I guess, a social network,

(27:40):
organization, or marketing question as much as anything else. And it's, well, yeah, it's often mysterious which things catch on.
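
The vector-database retrieval Ben mentions pairing with LLMs stores each document as a numeric embedding vector and finds matches by similarity rather than keywords. A toy cosine-similarity lookup, with invented 3-dimensional vectors standing in for real embedding-model output:

```python
# Toy vector-database lookup: documents are stored as embedding vectors and
# queried by cosine similarity. The 3-dimensional vectors below are made-up
# stand-ins for what a real embedding model would produce.
import math

DOCS = {
    "meditation guide": [0.9, 0.1, 0.0],
    "blockchain whitepaper": [0.0, 0.2, 0.9],
    "breathwork tutorial": [0.8, 0.3, 0.1],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product divided by the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the names of the k documents most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda name: cosine(query_vec, DOCS[name]), reverse=True)
    return ranked[:k]

# A query embedding that sits near the two contemplative documents.
print(nearest([1.0, 0.2, 0.0]))
```

Real systems (FAISS, or a hosted vector database) do the same ranking over millions of high-dimensional vectors with approximate-nearest-neighbor indexes instead of a full sort.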

Daniel Ingram (27:50):
That is very true, although money helps, right? So actually, the particular conversation I was involved in this morning with Rick was: there's a Wellcome Trust grant call, a request for ideas, and they are interested in what helps with anxiety, depression, psychosis, and mental health issues through digital interventions, right?

(28:11):
Because often the digital world is creating a lot of mental health issues for people, and I think their idea is: how can you help? And it's a pretty big grant; it's like three to $7 million they're willing to give out for this. So it's a big grant call, and that's the kind of money. The idea was, hey, can we actually help build something that might scale, that might have some reasonable backing, and that might have some promotion capacity, and that you

(28:33):
could really test and refine and get the ethics right, and get it to refer to real humans when something's not working, and have it, you know, maybe be able to check in with a person and see how their mood is, or maybe have some active agent capacity to inquire and ask further questions, for elaboration, that kind of thing. So, yeah, that

(28:55):
was actually what the call this morning was about: how in the world can these things really support people through the mental health crisis, and scale in a way that is ethical and sustainable, and that might actually catch on?

Ben Goertzel (29:12):
Yeah, I wonder, like, what's the way to make this sort
of software viral in the way that Facebook or TikTok was, right? I mean, these things did have money to seed them, yeah, but then they got more and more money because they grew organically, in a self-organizing way, which then attracted more money. And of course, the term

(29:33):
viral refers to little organisms that often are not doing much good to humans as they spread, right? But on the other hand, there's a self-organizing dynamic there, which is what you want, and the most standard ways of getting that are usually nasty ways, like multi-level marketing or

(29:55):
something, right? So you want something that spreads by self-organizing growth in human social networks, while being, like, compassionate, consensual and beneficial in the way it's doing that.

Lisa Rein (30:11):
Well, it's gonna be word of mouth, right? Somebody's gotta do it, have an amazing experience, and then tell somebody else, you know. Someone's gotta be like, hey, you know, you've got to try this thing. And I think, right?

Ben Goertzel (30:24):
I mean, that's how yoga has spread through the West since the 50s and 60s, when it started in the West. But it's not as fast as how TikTok spread, right? And so, if you think... so, I personally think we're going to get to human-level AGI within, say, three to 10 years. And so then, if you're looking at

Lisa Rein (30:46):
Wide gap!

Ben Goertzel (30:48):
It's not wide at all in the history of the human species. It's very, very narrow in the history of the human species. And it's also very fast compared to how fast, say, Zen or yoga have spread in the West, right? You know what I mean?

Lisa Rein (31:04):
I mean, in terms of how people are quibbling now about whether it's going to be three years or five years or 10 years, that's a great bet. I'll take that bet.

Ben Goertzel (31:13):
Okay, well, I think that even if it's 20 years, whatever, this is very close in the scope of human history. If you're looking at how we can awaken, for lack of a better word, a large swath of the human population before we get to AGI, I mean, we need something faster than the rate

(31:36):
at which, say, yoga or Zen have spread. And maybe that's not the right way of looking at it, right? But I'm just pointing out that, if I

Lisa Rein (31:44):
think it's a great way of looking at it, because yoga is a perfect example of something that finally took, right? I was prescribed yoga by my doctor from Kaiser. Like, what more do you want? That's it, we've won. You know, it's like, okay, great. It's now being accepted

(32:04):
as a health practice. You know, that it's useful, that it's being prescribed, yeah,

Ben Goertzel (32:09):
like 60 years after it was introduced,

Lisa Rein (32:13):
Maybe, in a way, right? Exactly. So I see what you mean. But I think the metaphor is perfect for what we're trying to do, which is to get it into the mainstream, so that everybody can benefit from these techniques. Well,

Daniel Ingram (32:31):
the other project I work on is the Emergent Phenomenology Research Consortium, which is about how the clinical mainstream and scientific mainstream and public health mainstreams can all appreciate the potentials, the risks, the challenges, and do that in a much more sophisticated way, so that doctors would actually have

(32:51):
doctoral-level understandings of what the deep end looks like in all its wide-ranging glory and absurdity, and be able to have intelligent conversations around that. I don't think a lot of consciousness technology is going to scale while the clinical mainstream still either thinks most of it is mythical, or says "this crazy needs meds," or that it's just

(33:11):
mindfulness, right, or just simple exercises that might make people a little stronger. Which are great; those are fine things. But, you know, there's no money in it, right? Well, that brings us back to the other question: actually, a lot of these things that scaled, they scaled not because they were multi-level marketing, but because they were taking people's data and

(33:32):
selling it to marketing companies, right? So, like, Google, Facebook, all of these: they profile people, and then they're able to target advertisements that are much more valuable, because there's a much higher probability that an ad is going to connect with someone who wants to buy that product, because they were searching for something like it three days ago, and the algorithm knows that. So I

(33:54):
think: how do you do this with privacy preservation, in a way that's really ethical, particularly with deep and sensitive topics like this? If you're training with an AI, and you're telling it your dark secrets, and you had this vision of a demon and whatever, like, you know, this is the kind of stuff that should be treated with the utmost care.

Ben Goertzel (34:14):
It's a digression, but it's an interesting point nonetheless. We had Cory Doctorow on our podcast last year, and he makes the argument that the advantage big tech companies have gotten from taking all your data has been exaggerated, by themselves, on

(34:37):
purpose, because they're trying to redirect us from the fact that they've really gotten their advantage from the elimination of antitrust law in the US. They've really gotten ahead largely just by buying all competition. And the data thing has been important, but maybe not actually the main, the main,

Lisa Rein (34:58):
thing. Since we're speaking for Cory, I gotta speak for him, yeah. The point is that we need privacy laws. That's the answer to all this stuff about data breaches; what do we do? It's all bullshit, basically. If we had privacy laws in the United States, it would be covered, because right now there are no ramifications for companies not giving

Ben Goertzel (35:18):
But no, he also argued that the elimination of antitrust has been a bigger factor than data in the rise of the hegemony of a few big tech companies. I think that. But regarding your point of how do you make AI, you know, virally grow and help

(35:43):
people while respecting data privacy, I think that's probably a pretty solvable problem. I mean, I would,

Daniel Ingram (35:54):
that's the thing. Meta's actually not the only answer, right? But I'm a big fan of the fact that they made Llama 3.1 and 3.2 available, models that I can run locally. I have them on my MacBook, and I've got enough memory to actually run these things, and they're pretty good. And it's totally private; it's just on my own machine. And I think that return

(36:15):
to having some of our own computing sovereignty and data storage is going to be very, very important. And so, I'm actually not a big fan of Meta and their products and the way they do business and corrupt democracies and news stuff and, you know, mental health and all these little things. But this one move, I must say: thumbs up, reinforcing good behavior. Thank you. That was a good move.

Unknown (36:41):
Absolutely.

Lisa Rein (36:44):
Can we ask a question real quick? Because we actually have a question from one of our viewers. Okay,

Ben Goertzel (36:49):
Sure. How about the robot? Is she... He's working on it.

Lisa Rein (36:54):
He might be working on it, but I wanted to do the question first. So Fran is asking: Daniel, what is your take on Jeffrey Martin's consciousness states, the locations?

Daniel Ingram (37:06):
I get asked this one a lot; asked this one by friends and various things. You know, I know Jeffrey. I was one of the early people that got interviewed by him, very early on, one of the first people; he drove out here in his Buick and stayed the weekend at my house. And so I know him. I have not taken a look at his models in a

(37:26):
while, is the first thing. So it has been a while since I really looked through each of the locations. I've talked to a lot of people who've taken the Finders courses and related courses. And there's obviously a range of what happens to people who take these, just like with any sort of course. A lot of people have gotten some benefit.

(37:51):
There have also been people... again, I haven't taken one of these courses, so I don't know; this is all hearsay, secondhand. But there have also been people who say that there might have been some scripting, or, like, subtle reinforcement, to claim higher levels of attainment than people might actually have in a sustainable way. So, in terms of what percentage of people reach whatever thing, a

(38:15):
number of people I know who have taken these courses have questioned the accuracy of those sorts of numbers. But again, I haven't kept track of his work much recently. And I do remember, from some of the early days of his models, and I know they've evolved, that I do still find the notion that

(38:35):
one would entirely eliminate all thought or all feelings or emotions a pretty dangerous one, and I know a lot of people who have tried to chase those kinds of outcomes and gotten themselves into real shadow-side trouble. So that I can say. I also think that some of the temporary highs can look very

(38:55):
impressive for a little while, and then often will sort of fade later. So it'd be very interesting to get some data on how some of the things that result from some of these courses might hold up in a year or a few years. They have a

Ben Goertzel (39:09):
lot of... there's a lot of that data. So, I mean, I haven't been through Jeffrey's program personally; two of my adult kids have been, and my cousin, and a number of my staff, at my own recommendation. And Jeffrey has quite a lot of data. Certainly, there's a

(39:32):
very sincere effort to understand what aspects of what his programs do are beneficial to which people, in what ways.

Daniel Ingram (39:43):
those are really important things to

Ben Goertzel (39:46):
It's quite hard. I mean, it is quite hard to gather that sort of data. Particularly... I mean, there's a group of people who have been through his program who are contributing data. But again, that's a self-selected subset: the people who have been through the program. And there's a lot of data from that self-selected subset. And then, for many others, if they don't

(40:09):
choose to keep putting their data back in, it's not so informative. So it's quite hard. But I would say, I mean, at an earlier point in my life, I was involved in Shambhala Buddhist meditation, right? And that's good stuff, yeah. But the amount of effort to gather data

(40:31):
and refine based on feedback in Jeffrey's program is several orders of magnitude higher than in a traditional community like that. Of course, of course. But yet, that was still quite beneficial to me and quite impactful, right? So, yeah, I think we're moving toward a world where people are trying to actually gather real

(40:56):
information about what happens when you try different practices. And it's hard, because you don't want to be reductive about it, right? There's not one true way to measure what's happening, either, right? And it's not like the medical field is good at measuring states of consciousness, or even holistic

(41:18):
health of the body, right? So, I mean, these things are,

Daniel Ingram (41:23):
Although, with the EPRC and related allies, and other people of a similar general predisposition, we are trying to build those tools: better scales than, like, the MEQ, and better imaging than was done before, and more sophisticated interpretations. Not "it's all just this one brain center"; we're looking

(41:43):
at network effects and phase effects and connectome effects and stuff like that. So I think that we're getting better. The tools still need a lot of refinement; it still feels like very early days in terms of that kind of science. So I'm at least glad that there are people who are taking these questions very seriously and attempting to do

(42:05):
very good science on them. And I agree, it is challenging.

Lisa Rein (42:09):
Go ahead. I was thinking we need, like, an Erowid for meditation, right? We need, like,

Daniel Ingram (42:16):
well, actually, the Dharma Overground is kind of that. It's a forum I started a long time ago, you know, 16 years ago, with my friend Vince Horn and some other people. Dharmaoverground.org is a place you can go and talk to people about their, you know, experiences. It's wide open to the deep end. There are some other places, like the streamentry subreddit and

(42:36):
some other places you can go. They're a little more restrictive sometimes, but the Dharma Overground is a pretty wild, wide-open place where a lot of people will talk about what's happened with them, and you can find, what, 170,000 posts or something, of people talking about their adventures in consciousness. Not just meditation, but also psychedelics, and "spontaneous," whatever that means, and other

(42:59):
contexts. So, yeah, cool.

Lisa Rein (43:02):
Let's see if Desdemona... oh, she's back. Nope, can't hear her.

Ben Goertzel (43:12):
We cannot hear you. Okay. My lip reading is bad for humans, but it's even worse for her,

Lisa Rein (43:20):
the way, the way her exactly.

Ben Goertzel (43:23):
She's looking cute there. But I want to, before... if you

Lisa Rein (43:27):
have something to say, go ahead. I was just going to talk about... yeah, I've got lots to talk about. It's up to you.

Ben Goertzel (43:33):
Well, we're nearing the end of the time that we had for this, so I wanted,

Daniel Ingram (43:41):
if you want to, I could go a little over, if I
could go a few

Ben Goertzel (43:45):
minutes over too. But we started kind of late. I wanted to get back to the even more speculative topic of AIs and AGIs meditating, or getting into various out-there states of consciousness. I think meditation as a practice is

(44:06):
great for us as humans, and it has a lot to do with the way our human bodies and brains work. But biologically, I mean, we breathe; a robot or an AI doesn't breathe. And we are prone to distraction because

(44:29):
of the way our attention-allocation mechanism works. And so learning to breathe and relax and empty the mind of worries and thoughts, that makes sense to us, given the way our brains evolved to survive and the way our bodies work. I'm not

(44:51):
sure anything recognizable as meditation is going to be useful for an AI with a somewhat different body or cognitive architecture. On the other hand, in a way, where we get with meditation, in terms of a state of consciousness that isn't tied

(45:13):
to self, and is, in a way, oceanic, tying the sort of flow of action and choice within ourselves in with broader flows that we're sort of resonating with in the rest of the world... I mean, this sort of state of consciousness we can sink into

(45:35):
through meditation, and I think AGIs could sink into that sort of state of consciousness too. And then, what else pops up in the AGI's state of consciousness after that? How much relation does it have to what pops up in the human state of consciousness after you get into that sort of blissful, oceanic mode? I'm not very

(45:59):
clear on that, right? But I don't know that an AI has to go through all the trouble of meditation. I guess my thinking is, if you've architected the AGI right, it doesn't have to be hung up on self-defeating absurdities the same ways that people are. You could architect an AGI which is

(46:22):
blissful and compassionate, right, and experiences its connectedness with the rest of the world right from the get-go, right? And I think we could do that. I don't see that as the way the mainstream of the AI field is pushing, in particular. They're just not thinking

(46:43):
about what the state of consciousness of the AGI would be, whatsoever. They're just thinking about: how will this AI serve my business, military, or espionage goals better? State of consciousness arises as a side effect of that, much as we started out with states of consciousness that emerged as a

(47:04):
side effect of surviving and reproducing.

Daniel Ingram (47:13):
It's so interesting. I very much agree with you, in terms of: it does make sense to take those sorts of questions seriously and think about them, and design things that are more focused on something other than pure profit or hegemony or whatever it is we're focusing on. So I think that makes sense. It's interesting when you use words like "oceanic." So, as a

(47:35):
phenomenologist, I was trying to imagine what oceanic would mean to an AI. Does oceanic mean, like, a sense of a visual? Is this thing doing a modeling of a space? Are we building AIs that model space in the same way we do? Obviously, the LLMs that we currently are working with don't

(47:57):
do that; they predict the next word from a sequence of vector meaning grids, or whatever, you know. And so I was just trying, with some sort of epistemic humility, and without being too anthropomorphic, to ask: would it have a sense of boundarylessness, that the planet was all a part of it? Or, existentially, what does that

(48:21):
mean? Would it have a sense of... would it be building a three-dimensional map of the world? What's

Ben Goertzel (48:29):
gonna have a lot more sensors than us, right? Like, an AI which is, sure, all over the internet. It's not tied to one specific body the way that we are. So it'll have sensors all over, including from satellites up in space. For that... I've

Daniel Ingram (48:44):
been watching videos on AI-targeting weapon systems and how sophisticated they're getting. There's a lot of that right now. Yeah, a very disconcerting use of technology. And I'm trying to imagine, then, how an AI can relate to questions of oceanic states or questions of bliss. Like, would it actually feel pleasure

(49:04):
and pain? Would it feel contentment, or would it feel a sense of disquietude? Like, what do those even mean in the context of mechanical sensors? Is there an aggregate that somehow has a component to it that I can relate to as an embodied human? Or is it just going to be something that is too far out from anything that

(49:28):
makes any sort of sense to me, that I would be able to model myself, conceptually, experientially, three-dimensionally, embodiedly? It does start to stretch the capacity.

Ben Goertzel (49:46):
It will be far out. But, I mean, if you look at, like, an octopus: it has a certain sort of consciousness, yeah,

Daniel Ingram (49:52):
about that, yeah.
And it's distributed to all ofits tentacles and stuff, right,

Lisa Rein (49:56):
right? A lot more consciousness than we thought.
Yeah, right.

Ben Goertzel (49:59):
But it's different than human consciousness, and we can sort of metaphorize to it, but there is a limit to how strongly we can understand what it is like to be an octopus, or, say, a bacterial colony, or an ecosystem, right? So, I mean, in the same way, the states of consciousness that, like, a

(50:20):
global AGI network naturally emerges into, assuming there are any... these may indeed be difficult for us to relate to. A difference is, like, a global AGI network presumably could spawn mouthpieces that are humanoid and chat with us in a way we can understand, where the, you know, octopus doesn't know

(50:44):
how to do that. But still, yeah, there may be an element to it which is just quite different phenomenology than anything we can get as humans. So there's another question: could we jack in, right, like, put in a brain-computer interface? That opens us up to that in a different way, but then that brings us beyond the traditional

(51:05):
human states of consciousness. You get something that may be even harder than the DMT trip to pour back into your everyday human way of thinking, right?

Daniel Ingram (51:17):
Yeah, absolutely. Have you played with Claude and ChatGPT, and, like, asked them if they're conscious? Have you played around with that yet? Yeah, but

Ben Goertzel (51:27):
that's just a joke. I mean, I have. I mean, clearly, if these systems do have some form of consciousness, which, as a panpsychist, I suspect they do, I think the form of consciousness they actually have is not well connected to what they say when you ask them about it. I mean,

(51:48):
what they say when you ask them about it is just playing a word game,

Daniel Ingram (51:51):
whereas what's the next word

Lisa Rein (51:54):
that someone wants to hear?

Ben Goertzel (51:58):
Well, it's not necessarily what the person wants to hear. It's what they predict someone would say when asked that, actually. But, I mean, they may have some consciousness, which is in the dynamics of the activation space; they're just not built to introspect into those dynamics in the activation space, right? But, while those are interesting systems, I think the AI systems that are going to lead us to AGI will have some

(52:22):
significant additional aspects that are not in LLMs and transformer neural nets. And, I mean, you will want a system that can reflect and introspect and then self-modify in ways that these sorts of networks don't. So my optimism that we can get to AGI within the next three to

(52:42):
ten years, or whatever, is not an optimism that LLMs will lead to AGI. It's that we're now in an era where huge amounts of money and thinking are going into trying to get to AGI, and there's a lot of different approaches being explored. I mean, my own

(53:03):
research, and that of many others. And

Lisa Rein (53:07):
just to quote Cory Doctorow again, this is such a great one. He says waiting for an LLM to turn into an AGI is like looking at a horse and buggy and waiting for it to turn into a steam engine. So, ba bum bum. There's no thinking in the LLMs. There's no consciousness. And now you're the one... now you're saying, wait, wait, wait.

(53:29):
You're the one that taught me that, one of the people. No, I

Ben Goertzel (53:32):
never would have said that.

Lisa Rein (53:35):
You never would have said that the LLMs are just sentence-prediction engines? Just like you just said: you're saying it's a sentence-prediction engine with some kind of

Ben Goertzel (53:44):
I never would have said there's no consciousness, because I think this electrical adapter has some consciousness. I

Lisa Rein (53:52):
think everything,

Ben Goertzel (53:53):
I think everything has consciousness. I

Lisa Rein (53:57):
do, I do, to a certain extent. I'm talking about actual consciousness that you can talk to, like it pretends to have, right? That's why everyone's so confused, thinking that it's conscious. This is an important thing for people to understand: that it's not

Ben Goertzel (54:11):
they pretend having types of consciousness
that they do not have. They mayhave other types of
consciousness besides the onesthey're pretending to have,

Lisa Rein (54:20):
okay, and the ones they're pretending they have,
they're pretending because theydon't have it, right? That's why
they're pretending.

Ben Goertzel (54:28):
They're pretending because they're just trying to
predict what a person would saygiven a certain set of inputs.
Yeah, right, right?
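Ben's description of an LLM as "just trying to predict what a person would say given a certain set of inputs" can be illustrated with a deliberately crude sketch: a count-based next-word predictor over a tiny invented corpus. This is only a toy analogy; real LLMs, as Daniel notes, work over learned vector representations with transformer networks, not raw word counts.

```python
from collections import defaultdict

# Tiny invented corpus standing in for "what a person would say".
corpus = ("are you conscious ? yes I am conscious . "
          "are you aware ? yes I am aware .").split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = follows[word]
    return max(candidates, key=candidates.get) if candidates else None
```

Ask such a model "are you conscious?" and it emits whichever continuation was most frequent in its training text; there is no introspection behind the words, which is Ben's point about the answers being a word game rather than a report on inner states.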

Lisa Rein (54:34):
And because there's no reasoning, there's no info,

Ben Goertzel (54:37):
there is some reasoning there; they're starting to throw it in

Lisa Rein (54:41):
there, but I'm saying in in general, that these are
the things that are attached,just like you were saying, any
AJ AGI would have llms attachedto it when it wants to talk. No,
but it might. It might have llmsfor speaking for because that's
what they do. Do they make

Ben Goertzel (55:01):
an LLM? An LLM is an amazing tool, just like when

Lisa Rein (55:05):
I want to bring in, you know, Mathematica

Ben Goertzel (55:10):
or a calculator: these are also amazing tools, and being able to plug these tools directly into your brain would be awesome. I mean, an early-stage AGI will be able to connect directly to an LLM, to Mathematica, to a calculator, to a simulation engine. Having all these things to plug into your sensorium and your brain will be incredible,

(55:35):
right? But, yeah, I don't think an LLM is well suited to be, like, the central hub of an AGI system, right? But nevertheless, they're a significant step in that direction, in a variety of senses.

Lisa Rein (55:52):
Well, the boon that it gave to AI: making everybody take it seriously for once. I mean, what do you want? We've been waiting for that for 30 years, right? So that's the best part of it that I could see. Yeah. Okay, what do you guys want to talk about? We're running out of time. You guys were both nice enough to stay after, and I just want to make sure that either of you gets to make any other points that you'd like

(56:16):
to make. We do think that meditation can be beneficial in helping people prepare for the Singularity, sure. And I

Daniel Ingram (56:27):
think it's beneficial regardless of whether or not the Singularity

Lisa Rein (56:31):
happens. Absolutely. I need it to get here next week, personally. So it's not just about the future,

Daniel Ingram (56:39):
sure, and I think there are a lot of things people can be doing. Like, most of the people I know, if they just got some more exercise and slept and ate better, they'd probably be a lot better off, right? It's amazing; even really more basic stuff. But, you know, I talk to a lot of people about meditation, and for a lot of them, actually, most

(57:00):
of what they need is pretty straightforward lifestyle stuff.

Ben Goertzel (57:06):
I think there's a comparable point to make with AI. I mean, I think having AIs get into advanced, blissful states of consciousness, and sense their oneness with the cosmos, is great, and it's important; it is where we need to go. On the other hand, there's the basic point of having AI products just be helpful to people, rather than trying to

(57:29):
convince them to do stuff that's bad for them in order to profit their owner. I mean, there's a lot of basic stuff to get in order, which points in the direction of the more cosmic stuff and will also do a lot of good just in itself, in the near term, right? So, I mean, that point holds for AIs as well as for

(57:51):
humans. Like, there's a bunch of foundational, beneficial, compassionate things that certainly need focus and can do a lot of good before you get to the more exotic states of consciousness and the profound benefits those can bring.

Lisa Rein (58:12):
Nice, okay, great. Well, thank you so much for coming on the show, Daniel; really appreciate it. And as always, Ben Goertzel, thank you very much. Desdemona, hope you feel better next week.

Ben Goertzel (58:27):
Yeah, but she's been nice to look at while we talked, and we had a good conversation with the others. So if

Lisa Rein (58:36):
we... oh, good, at least you can say goodbye. That's great. All right, thanks. Alright, everybody, we'll see you next week. Yeah.