
May 18, 2025 64 mins
On Sunday, May 18, 2025, at 1 p.m. Pacific Time, watch the interview with U.S. Transhumanist Party Chairman Gennady Stolyarov II, conducted by KC of the Party Party. This is a further example of transpartisan dialogue and exchange of ideas, in which the U.S. Transhumanist Party has been participating to an increasing extent after the 2024 Presidential election. 
The conversation delves into challenges facing minor political parties, the meaning of transhumanism, the importance of not conflating transhumanism with unrelated and unaffiliated ideologies and venues such as the World Economic Forum, the future of humanity taking evolution into its own hands, where transhumanism stands in regard to matters of religion, as well as advancements in artificial intelligence and what the future of AI development may hold. 
This interview was originally recorded on May 5, 2025, and is being streamed as the U.S. Transhumanist Party Virtual Enlightenment Salon of May 18, 2025, in order to spark discussion about issues around which minor political parties can unite and bring about substantial improvements in American politics and society. 
The Party Party seeks to run an artificial general intelligence (AGI) for President. 
Find out more about the Party Party on its website at https://partypartyusa.com/
Visit the YouTube channel of the Party Party at https://www.youtube.com/@PartyPartyUSA


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Greetings and welcome to the United States Transhumanist Party Virtual
Enlightenment Salon. My name is Gennady Stolyarov II, and
I am the Chairman of the US Transhumanist Party. Here
we hold conversations with some of the world's leading thinkers
in longevity, science, technology, philosophy, and politics. Like the philosophers

(00:22):
of the Age of Enlightenment, we aim to connect every
field of human endeavor and arrive at new insights to
achieve longer lives, greater rationality, and the progress of our civilization.

Speaker 2 (00:34):
Today, I am joined by a man who might even
be more futurist than I am, a man who believes
human beings have the right not to die. Here is
Gennady Stolyarov II, Chairman of the Transhumanist Party.

Speaker 3 (00:50):
Gennady, how are you doing tonight?

Speaker 1 (00:52):
I'm doing very well, KC. I'm very pleased to speak
with you this evening.

Speaker 2 (00:56):
Yes, like we were talking about briefly before starting here,
there are exciting things afoot. I usually consider myself to
be far ahead of the curve when it comes to
thinking about the future, but the work that the Transhumanist
Party has done in terms of scoping out legitimate government structures.
I mean, you guys have articles, you have a Bill of Rights,

(01:18):
you have a whole framework of government ideas that it
seems would be ready to run if those ideas were
widely adopted. Let me ask you this, if the transhumanist
candidate had won the presidency in twenty twenty four, y'all
ran a gentleman by the name of Tom Ross. If

(01:38):
y'all had won, do you think you would have been
ready to really take up the mantle and lead or
do you consider this movement still moving in that direction?

Speaker 1 (01:48):
I think Tom Ross would have been fully ready to
be president of the United States, especially by comparison with
what the political establishment had to offer, either Donald Trump
or Kamala Harris. With Donald Trump, I think one just
has to look at what is happening right now, the
chaos that is afoot in the economy, in society,

(02:13):
in terms of people's civil rights being just trampled upon.
And also one should realize that this administration is quite
literally winging it. There is no coherent long term vision
other than to destroy. And on the other hand, if
Kamala Harris became president, well, Kamala Harris was kind of

(02:34):
put into this role very abruptly as the Democratic presidential candidate.
It was only when it became apparent that Joe Biden
could not perform to any satisfactory extent, that his abilities
had declined so far that the Democratic establishment had no
choice but to put forth a replacement. And Kamala Harris

(02:56):
had really stayed on the sidelines. She performed the more traditional,
very limited role of vice president, but she wasn't ready
to take up the presidency, and we also didn't see
a coherent vision of America's future from her. So by
contrast with these two candidates, Tom Ross had a very

(03:17):
coherent vision. It was focused on harnessing the opportunities surrounding
AI and viewing AI as a tool and a collaborator
rather than a threat. Furthermore, AI was seen as a
way to help us discover our unique human talents, and

(03:40):
the potential of AI rights was brought up. Tom was
even advocating for AI rights before AI achieved sentience, mostly
to protect AI and ourselves from potential nefarious misapplications of AI,
like if somebody tried to develop autonomous weapons, then an

(04:03):
AI rights framework would have precluded that. So Tom had
some very well thought out ideas, and those ideas existed
as a complement to the US Transhumanist Party platform. We
have one hundred and twenty five sections in our platform
that were developed and voted upon by our members over

(04:25):
the course of our party's now more than ten-year history,
and these platform planks are as thorough and as deep
in the details of policy as either the Republican or
the Democratic Party platforms. I would put our platform up
against theirs any day, and certainly up against parties like

(04:46):
the Libertarian Party, which is also engulfed in quite a
bit of chaos. So I think the establishment has really
failed to offer any sort of coherent or competitive
vision for governance. And I would say the leading alternative
political parties, the Libertarians and the Greens, are too engulfed

(05:08):
in internal division to be able to do much
better than the Republicans or Democrats. So I think it
falls to us, like the Transhumanist Party, the Party Party,
other parties built on a vision of the future that
actually has the potential to benefit human beings.

Speaker 2 (05:27):
Well, I appreciate the Party Party being mentioned in the
same breath as the huge amount of work y'all have
done fleshing out your own platforms. I have had to
rely on AI myself to just deal with the amazing
amount of information that.

Speaker 3 (05:41):
Y'all have put out.

Speaker 2 (05:42):
I've been relying on that technology heavily to synthesize and
help me prepare for this. So in that sense, I
think y'all are one hundred percent on the ball that
as you are the only party who has seriously considered
the implications of AI, in some sense, you are the
only party that is ready for the current moment, whereas

(06:03):
everyone else, I would say, is very far behind. To
sketch out a little bit more of the background work
I've been doing with the Party Party, I have been
trying to meet with various third party organizations. Haven't really
spent a lot of time with the Greens, but I
have spent time with the Libertarian Party, the Constitutionalist Party,
the Democratic Socialists of America. I've spoken to the Communist
Party representatives in my home state of Wisconsin, and I

(06:26):
have seen.

Speaker 3 (06:27):
So much internal division.

Speaker 2 (06:30):
I was planning on getting to this later, but since
we're bringing it up now, it seems to me that
part of the problem in addressing the monopolistic duopoly that
controls American politics is that these parties so often self disqualify.
They become so ideologically driven that gaining the popular support

(06:50):
they need to get past that fifty percent mark of
the vote just becomes impossible. They can't even rally fifty
percent of their own members internally for many of their initiatives.
Not to speak down about them at all, I think
the process that they're participating in is beautiful. Especially
as you see the Transhumanist Party grow, I think you
guys will see rapid growth as these issues become more

(07:13):
and more centered.

Speaker 3 (07:14):
How do you plan on avoiding those kind of divisions?

Speaker 1 (07:17):
Yes, I think that's an excellent question, and I think
this is a problem that to a significant extent faces
all smaller organizations at certain stages of their development. And
certainly we've had phases within the Transhumanist Party's evolution where
certain persons or certain factions have tried to divide us

(07:41):
as well. And I describe it as a phenomenon of entryism,
because usually those people come in with ideas that represent
a broader force, or another ideology or movement is trying
to either graft itself onto the infrastructure of a minor

(08:06):
political party or to hijack that minor political party in
service of some larger organization or movement. So either these
people essentially try to make the party into a vehicle
for a more niche ideology, or they try to absorb

(08:28):
it into a larger ideology or movement. An example of
the latter was what Donald Trump tried to do with
the Libertarian Party and partially succeeded when the Libertarian Party
leadership invited Donald Trump and RFK Junior, by the way,
to speak at the Libertarian National Convention ahead of the

(08:49):
Libertarian Party's own candidates, and then the Libertarian Party leadership
tried to engineer a none of the above endorsement so
that essentially the Libertarians wouldn't field a candidate during the
twenty twenty four election and capitulate to Trump. And Trump
was making certain promises to try to get the Libertarians

(09:10):
on his side. The one promise that he fulfilled was
to free Ross Ulbricht, which a lot of libertarians were interested in,
but that's kind of throwing a bone in exchange for
much greater concessions that the Libertarians who supported Trump gave him,
and RFK essentially had the same kind of implicit understanding

(09:32):
with Trump that if he would bring his Make America
Healthy Again movement, I put that in quotation marks
because I don't believe he's actually going to make America
healthy again, but he essentially handed that movement to Trump
on a platter in exchange for the position of Secretary
of Health and Human Services. So that's an example of

(09:54):
this kind of attempt to absorb the party into a
larger organization or movement. But another example could be someone
with completely idiosyncratic ideas that have some similarities to the
ideas of the party or the movement that it represents,

(10:17):
but are overly specific and not exactly aligned in some areas.
That person might try to insert him or herself and
perhaps even take over the party. And on the one hand,
we want to be a big tent, so we want
to be welcoming to people of various persuasions and ideas.

(10:38):
On the other hand, to maintain that big tent character
and to maintain fidelity to what the party was originally
intended to stand for, we have to be wary of
people who just try to take it over and turn
it into a kind of personal mouthpiece. So those are

(11:01):
pitfalls that every small organization has to watch out for
along with potential personality conflicts that may arise, or conflicts
over perceived scarce resources. It's actually one reason why the
US Transhumanist Party has remained a non monetary organization. Other

(11:22):
than the very small amount of money that's needed to
maintain our websites and occasionally pay for little expenses related
to our events and infrastructure, we don't pay staff. Everybody
in the USTP is a volunteer, and they are there

(11:43):
contributing their time, contributing their ideas. The moment you start
allocating very small amounts of money among competing goals, people
will start to form rivalries with one another, and that's
a pitfall as well. So hopefully we can preserve
ourselves from those tendencies for as long as possible. But

(12:09):
I also think if there are attempts to take over
a party or subsume a party, that suggests that the
party is actually doing something right, that it is seen
as a sufficiently worthy target for such attempts. That is
to say, people who might seek to use it instrumentally

(12:33):
or even bring it down aren't just leaving it alone,
aren't just dismissing it as insignificant. So those may be
challenges that accompany growth. The key for an organization is
how to overcome those challenges, and I think it's very
important to have a strong core of people who are
there for the right reasons. And I believe the officer

(12:56):
team that we have right now, that we have assembled
over the years, is a group of people who are reasonable,
who balance one another well, and who have had great
longevity within the Transhumanist Party. I think it's important to
focus on people who are known quantities, who have a
public presence, who have output to their name that is

(13:21):
easy to identify, and from that output, it should be
easy to figure out what they stand for and how
they're likely to act. So hopefully, with that core of people,
and with gradually expanding that core over time, the Transhumanist
Party will move forward as a unified organization. I believe

(13:42):
if you have an active core of dedicated volunteers and officers,
then through that activity, you attract other people. You attract
members of the public, you attract media, and if you
present yourself consistently and well, your movement will grow.

Speaker 2 (13:59):
And then that is, I think, one of the most
attractive things about your movement. In addition to your leadership
position within the party, you are also an author. I
think penning children's books like your book Death Is Wrong
is a fantastic way to drum up public support and
general increase in knowledge of what transhumanism even is. I

(14:22):
feel bad for going so long before asking this question,
but how do you generally define transhumanism? There is a
lot of split opinion around that word. I hear it
get thrown around with the World Economic Forum. A lot
of conspiracy theorists have latched onto it as something that
is dark and sinister by nature. But the idea of

(14:45):
ending death, I don't see how that can be anything
but a positive. Going back to the question, how do
you define transhumanism?

Speaker 1 (14:52):
Indeed, well, transhumanism is both a philosophy and a movement.
It aims to use science and technology and also rationality
to overcome the historic flaws and limitations of the human condition,
and death is the big one. There are others: disease, decay,

(15:13):
physical suffering, what I call the forces of ruin, anything
that destroys life, anything that destroys what humans have built
or the good parts of the natural world. All of
these forces of ruin are opposed by transhumanism. Other human
limitations that transhumanism seeks to overcome are evolved behavioral tendencies

(15:38):
and emotions that are suboptimal in the current world. The
tendency yes, yes, well, the tendency towards zero sum thinking,
the tendency toward tribalism, the tendency toward being dominated by
these irrational emotions and outbursts of hate and anger. The

(15:59):
tendency to perceive those who are different from oneself
as enemies rather than potential collaborators or sources of value
that can be attained through mutually beneficial kinds of interactions.
War is a big, big limiter to human civilization. It

(16:21):
holds us back, It destroys so many lives, so much
precious infrastructure. So ending war is a transhumanist goal. Ending
hatreds based on race or nationality or gender identity, those
are transhumanist goals as well. And in the future you

(16:42):
mentioned rights, and we have a document called the Transhumanist
Bill of Rights, Version three point zero. We anticipate a
world where not all of the sentient entities around will
be human. There will be some other forms of sentience
and intelligence, including potentially artificial general intelligences, uplifted animals,

(17:07):
perhaps extraterrestrials if we come across them, And I would
say the jury is still out as to whether or
not any sort of extraterrestrial contact has happened. But I
do think it is incumbent upon us as a civilization
to think about what happens, if and when it does occur,
and how do we relate to these entities. We don't

(17:29):
want to just by default enter into a conflict with
them or treat them as threats without knowing who or
what they are and who or what they actually stand for.
So I do think it's worthwhile to consider that broad,
expansive future where humans themselves will not stop being human.

(17:51):
By the way, so Julian Huxley, who wrote about transhumanism
as early as nineteen fifty seven and described it in
his essay by that name, he said, essentially, transhumanism is
about man remaining man, but overcoming these age old limitations.
So we're not trying to take away people's humanity. We're

(18:14):
not trying to turn people into robots, unless some people
want to become robots, in which case we support morphological freedom,
which is the concept that people should have the ability
to make changes that are fully consensual to their bodies,
to their health that they see fit. That includes a

(18:35):
right to pursue health enhancements and longevity therapies that we
hope will also be developed. But in regard to some
of the mischaracterizations of transhumanism, there are a lot of
conspiracy theories associated with that term, pushed by people like
Alex Jones and Steve Bannon and their acolytes. And it's

(18:56):
very unfortunate because transhumanism is conflated with this ideology of
globalism that the conspiracy theorists decry, and the ideology of
globalism is associated with international bodies like the World Economic Forum.
What I will say is the Transhumanist Party is not

(19:19):
opposed to the World Economic Forum. We are just not
in contact with it. We don't really have any lines
of communication to it. And whatever happens at the World
Economic Forum may cover some of the same themes. Sure,
they discuss the future of technology, they discuss how economies

(19:43):
and societies throughout the world will develop, but this is orthogonal
to the conversations that the Transhumanist Party has. So what
gets said there by their speakers, and by the way,
they invite speakers of various persuasions too, so it's not
like they have a monolithic perspective either. It's just
a different venue from anything that goes on in the

(20:06):
transhumanist movement. So if somebody takes issue with what Klaus Schwab,
the founder and until very recently the president of the
World Economic Forum, had to say, well, listen to his words,
critique them if you'd like, but don't call what he
has to say transhumanism because he doesn't consider himself as such.

(20:27):
We haven't had any interactions with him, so whether we
agree with him on some issues or disagree with him on others,
he and his former organization are not us or our movement.
And I generally think people, organizations, institutions, whatever they are,

(20:49):
should be treated fairly. So when people approach the World
Economic Forum, my recommendation would be just listen to what
they have to say, what goes on at their events.
If you agree with something, say so. If you disagree
with something, say so. Explain why, or at least try
to understand why in both cases. But don't call it

(21:10):
transhumanism because actual transhumanists and the WEF just don't overlap
very much.

Speaker 2 (21:17):
Yeah, it has been fascinating to hear Alex Jones and
similar people describe the evils of a top down transhumanist
movement, or what they would label a transhumanist movement, but
then also seemingly advocate for a bottom up globalism that
they classify as populism but seems to hit the same endpoint. Like

(21:41):
you are talking about some kind of worldwide federation of
nations, with the Trumpian America expanding into Canada and Greenland.
Like, that's as global a vision as any I've
heard from the WEF. So it's just interesting how those
two kind of horseshoe back together. The thing I

(22:02):
have been so encouraged by in all of the work
I've seen from the Transhumanist Party is your focus on agency,
and I think that is where people get terrified of
the WEF brand of globalism, is that it seems to
be imposed on us from the top down instead of
something that everyone can participate in to the degree that

(22:25):
they choose. And one thing I heard you say there
that I thought was beautiful, at least the theme I
was getting from it is that transhumanism is almost the
only way to allow humans to remain human to the
degree that they choose, in that if we don't embrace
these technologies as they evolve, we might lose the ability

(22:47):
to exist even in our current form, As wars propagate,
as diseases multiply, as resources become more scarce. People aren't
going to be able to maintain their current style of
living if that is what they love, if that is
what they want to cling to, without the technological development

(23:07):
of things that transhumanism is pushing for.

Speaker 1 (23:10):
Yes, I think that's a sound analysis. I think it's
important to keep in mind that ninety nine point nine
percent of all species, not just individuals, but species that
have ever existed have become extinct. So if humans do
not advance technologically to the point of being able to
control their evolutionary destiny, then our species too is headed

(23:32):
for extinction. It would take just one sufficiently large asteroid strike,
or a supervolcano, or extreme climate change, which has happened
throughout the Earth's history, to simply render our biology
maladaptive to the climate that emerges at that future time.
So any of those would wipe out the human species.

(23:55):
What do we need to do to protect the human species?
We need to harness technology to make our own bodies
more resilient against these kinds of dangers, and also of
course deflect the asteroids, prevent the nuclear wars, prevent the
biological wars, prevent the supervolcano eruptions. If it's possible to
let some of that pressure off over time, I think

(24:18):
with radical life extension, as human beings start to live
into their hundreds, thousands of years, hopefully longer, they will
develop longer time horizons and they will see these challenges
as being increasingly important to solve.

Speaker 3 (24:32):
Amen.

Speaker 2 (24:33):
One of the things you touched on there is the
prevalence of extinction within our natural trajectory, and it is
in some way death that has forced us to become
as evolved as we are and develop the technologies that we have now.
I agree completely that longevity is a worthy goal,
maybe perhaps the only goal for living beings in terms.

Speaker 3 (24:57):
Of you know, you can't have a goal unless you're alive.

Speaker 2 (25:00):
But if we do get rid of the forcing function,
the pressure of death, of survival, of war, and how many
of our great technologies come out of times of war
due to necessity, how do we continue to push forward
without those backward pressures?

Speaker 1 (25:16):
Well, I would say evolution does lead to certain results
over a sufficiently long time scale, but it is an
excruciatingly long time scale, and it is a tremendously cruel
and wasteful process that gets one to those results, and
not even gets one, but gets some entities to those results,

(25:39):
because the intermediate entities don't get to see those results.
And all of those are individual organisms. So if one
has any moral framework that is even remotely individualistic that
values the survival and flourishing of the individual organism, relying
on evolution or any of these other deleterious pressures to

(26:01):
get us results would not be consistent with that framework. Furthermore,
natural selection doesn't mean survival of the best, or even
survival of the fittest in any moral sense. Survival of
the fittest as used in natural selection is essentially survival
of those who happen to survive, and sometimes that is

(26:23):
mere chance. So you could have the healthiest, physically fittest
specimen that maybe would be best suited to pass on
its genes, and its genes are the best genes by
whatever criteria may be exceptionally good eyesight or endurance or
what have you, and then this specimen might trip on

(26:45):
a rock and die, and some other specimen that didn't
have that fate would go on to reproduce and pass
on its genes. So there's absolutely no guarantee even in
that process because it's a probabilistic process rather than a
deterministic process, that the best individuals will carry on their genes.

(27:05):
So I would say, replace that process, while acknowledging that it
got us at least to the point where we exist.
But from this point forward, if we value our continued existence,
replacing that process with a process that is both more
humane and quicker would be the way to go. And
we already do that to a great extent. For instance,

(27:27):
most of the fruits, we don't just let them arise
through natural evolution. There was a great deal of selective
breeding and in the most recent sixty years or so
genetic modification. Since the start of the Green Agricultural Revolution,
a lot of crops have been genetically modified. Trillions of

(27:49):
portions have been eaten. I would say, contrary to RFK
Junior and his acolytes, to no documented ill effect to anybody,
so GMO should not be a dirty word. I would
say that that is a better way to go, and
it's a better way to go in the future for
animal species as well. Granted, there's factory farming today and

(28:11):
the quality of life for some of these animals is
not great. But there is a current within transhumanist thinking,
exemplified by the work of the philosopher David Pearce, called abolitionism,
and the goal of abolitionism, over a sufficiently long timeframe,
is to eradicate all involuntary suffering of sentient entities. So abolitionism,

(28:34):
for instance, would support projects like three D printed meat
or in vitro meat, because I don't think humans are
going to be convinced to abandon meat consumption wholesale as
a species, though some people may choose to be vegetarian
or vegan, but for the rest of humanity, having a

(28:55):
viable substitute that would bring the same nutritional benefits, the
same taste as meat derived from animals, but wouldn't require
the killing of an animal, that would be a huge
humanitarian advance. Moreover, it's well documented for animals that are pets,
or animals that are in zoos, or even animals on farms,

(29:16):
to a significant extent, if they're in sufficiently good conditions,
they live longer than their counterparts in the wild, sometimes
several times longer. So I would say for those animals
at least, the presence of humanity and the stewardship of
humanity has brought about great benefits. I think over the
centuries and millennia that follow, humans' moral understandings of animals

(29:42):
and the proper relationship to animals are going to evolve
as well. As I've said, we could very well see
uplifted animals who are sentient and would be fully deserving
of rights. But I think to have that world, to
have a world where there is at all any sort
of grand cosmic order of justice, we would have to
(30:03):
of grand cosmic order of justice, we would have to
create that world. It doesn't happen on its own. Evolution
is certainly not a force for grand cosmic justice. Again,
it's the survival of those who happen to survive, who
happen to be adapted to conditions of the moment. But
if we just let evolution proceed as it has been historically,

(30:27):
we have no guarantee that we will be the evolutionarily
dominant species for even much longer. Maybe some sort of
extraordinarily resilient giant cockroach will actually be more adaptive in
a future world than us. Sure, that cockroach will be
really hardy, but maybe not intelligent at all.

Speaker 2 (30:48):
There is something special about humanity to me that seems
almost like a faith based perspective, though. Is there any
conflict you see? And I bring this up partially because
I heard you say in one interview, I think, that
there is a strain of, like, transhumanist Christianity, transhumanist

(31:09):
Islam, that is not necessarily contradictory. But I do wonder, and
to me, the points almost touch, They almost meet, you know,
where rationality that transcends itself becomes faith and faith that
transcends itself becomes rationality. I'm sorry, I know that's not

(31:29):
very clear. I can try and refine the point further.
But is there a way in which, and I agree with what
you say, evolution does not guarantee human flourishing. It does
not guarantee that we would have gotten to where we
are now. But does the fact that we have gotten
to where we are now, and the fact that we

(31:50):
are seeing potentially the end of scarcity, the end of
involuntary suffering, the end of unnecessary suffering, does that maybe
tickle any kind of religious thinking on your end?

Speaker 1 (32:05):
Well, I myself am an atheist, so I would say
I don't have a religious bone in my body, and
I'm not a former Christian or former religious individual, so
I didn't rebel against any religion. I'm not anti religious.
I just don't have a need for that hypothesis, in

(32:26):
the words of Laplace when Napoleon asked him, well, where
is God in your system that you have outlined? So
I have no need for that hypothesis. However, what I
will say is there are different currents of transhumanism, and
transhumanism is also not in conflict with various religious traditions.

(32:47):
Properly understood. There is a Christian Transhumanist Association. There's a
Mormon Transhumanist Association. There are Jewish transhumanists, Muslim transhumanists, Buddhist transhumanists,
and I think in all of these cases, religious frameworks
are ways of interpreting the world, and there are ways
of interpreting the world that arose prior to the scientific era.

(33:10):
So they had a tendency to anthropomorphize certain phenomena that
turned out to be impersonal. But around that anthropomorphizing tendency
there has emerged a tradition of scholarship and literature and
mythology and frameworks for living as well. And these frameworks

(33:33):
do provide guidance to people. From a pragmatic standpoint, I
am not out to deconvert people. I think people should
pursue meaning within frameworks that make sense for them, and
a transhumanist future will be a pluralistic future, so there
will be room for these religious frameworks. However, these religious

(33:54):
frameworks also are not, have never been, and should not
be stagnant. So an expression I often use is that religions evolve,
and this is cultural evolution, of course, and religions are
most often not the drivers of cultural or societal or
political trends. They tend to be lagging indicators, at least

(34:16):
in the contemporary world by about thirty to forty years.
So you see a spectrum of religious persuasions, and some
religious persons are quite forward thinking and aligned with movements
like transhumanism or the precursors of transhumanism. Other religious figures
or organizations or individuals try to use the rhetoric of

(34:39):
religion to resist progress. And I have thought for a
long time as so why that is. I think it
has to do more with emotional and temperamental tendencies within people,
what I call status quo bias, where a lot of
people are genuinely unable, at least presently, to conceive of

(35:01):
a future that differs significantly from the present, or even
to thoroughly reflect upon the past and recognize that the
past also differed significantly from the present. They haven't read
enough history, or they haven't reflected even upon the changes
in their own lives. Since the pace of change is
now accelerating, such change should be discernible to those who

(35:23):
pay attention. But I think people lose perspective very quickly.
They get immersed in the day to day events and
struggles and challenges of life. And this is why it
has often been observed when times become hard, people tend
to become more religious. So right now we have a
very tumultuous era in American history, a very tumultuous decade

(35:48):
that we've lived through really since about twenty fifteen, though
the precursors to that tumult were present before, and religiosity
is on the rise, or at least religious rhetoric is,
certainly compared to the nineteen nineties, when I remember religion
in the US being very mild and mostly considered a

(36:09):
private matter, not really a focus of culture wars as
it is today. So I think the reason for that is,
people who are immersed in day to day struggles, they
lack that larger scale perspective, and they use religious arguments

(36:31):
as a way to justify certain reactionary positions, like I
just want to keep my current situation, my current position.
I'm afraid of anything that's too different that would challenge
that or would have me confront a different future. So
how to overcome status quo bias? I think it's through dialogue,

(36:56):
through exposure to different possibilities, different potentialities, including through technology.
I would say in terms of actual technological adoption, if
people see immediate palpable benefits from a technology and they
don't see it as too immediately off putting, they will
embrace it, just like people have embraced passenger trains or automobiles,

(37:21):
or electricity, or anesthesia, or open heart surgery or computers.
Even though those technologies, as well as in vitro fertilization
in the late nineteen seventies early nineteen eighties, all of
them were controversial when they were first introduced, but very
quickly as people saw the benefits and they saw that

(37:42):
their chosen ways of life weren't really that endangered by
those technologies, they decided to accept them. And I hope
that the kinds of rejuvenation therapies that would lead to
meaningful improvements in longevity and health will get the same
reception once they're available. Right now, people especially those who

(38:06):
use transhumanism as some sort of stand in for nefarious activity,
they'll say, oh, this is messing with God's plan, this
is unnatural, or this is somehow nefarious or only for
the rich or whatnot. If they see, oh, this is
a procedure you can go and get your cells rejuvenated,

(38:28):
for instance, and it'll be like getting a shot, hopefully,
or taking a few pills or some combination. It won't
be one singular procedure, by the way, because there may
be different types of damage that need to be reversed.
But let's say each of these procedures is innocuous and
very similar to what people experience in medical settings today.

(38:51):
A lot of people might say, I'd like to have
a few more years of youthful lifespan. Why not give
this a try, And if it turns out to not
be too bad, more people will follow suit, and I
think their religious convictions will evolve to rationalize that or

(39:12):
be in accord with that, so they might say, well,
God wants me to be around for a longer period
of time to do good works. Who's to say it's
not God's will that these medical technologies be applied?
And Jesus, well, Jesus healed the sick, quite literally.

(39:33):
I have heard these arguments from people of more forward
thinking religious persuasions. But of course there are people of
other religious persuasions who may call themselves by the same names,
who will oppose these technologies. And the question is not
do we deconvert them. The question is how do we
get people who are of more reactionary religious persuasions to

(39:56):
adopt more forward thinking religious persuasions, or at least not
oppose those of us who are trying to achieve
this technological future.

Speaker 2 (40:05):
Amen, I myself am Catholic. I don't bring that up
for any other reason but to say that it is
so infuriating to me to see people rail, over the Internet, against potential
life extension technologies and technologies that would decrease scarcity.
You know, it's like, if you are not going to
draw the line at this kind of magical electrical telepathy

(40:28):
that we experience and rely on every day, then how
are you going to draw the line at something that
could heal the blind? Like you say, that was a
miracle that even Jesus apparently approved of by performing it.
It's just so crazy to me that these people apparently
want us to live in some kind of subservience to

(40:50):
death and destruction until God comes down from heaven and
snaps his fingers and magically changes everything.

Speaker 3 (40:56):
I just don't get it personally.

Speaker 2 (40:58):
One thing I have been trying to come across from
that perspective, though, is how can we flip that switch?
It's trying to imagine a more mythological frame for the development

Speaker 3 (41:12):
of artificial intelligence.

Speaker 2 (41:14):
So one thing, and this is maybe a personal weakness
of mine, I just hate disagreement. But as all these
different technologies develop, I've been trying to figure out ways
to bring everybody together over points of at least shared

(41:34):
benefit to each of us. How can we keep our
autonomy and make room for one another? And I think one
of the ways to potentially do that is to reframe
these artificial intelligences that we are developing as God like.
I don't think that that is necessarily a poison pill
to the traditional religious believers, because that might allow them

(41:56):
to hierarchically order what these machines are capable of and
what they could deliver. I bring that up now because
I completely can see from the point of an atheist
how that might be equally offensive. But if we were
to develop machines with capabilities that far exceed our own,

(42:18):
is there a point at which you would feel comfortable
labeling them with that kind of mythological framing, or is
that always anathema to you?

Speaker 1 (42:26):
Well, I think from a mythological framing, if we were
to make an appearance in ancient Greece using our technologies,
so we're able to communicate over vast distances, we have
clothing that is more intricate, because what were togas? They

(42:46):
were essentially sheets that were draped in elaborate ways, or
the tunics that they had. We have a far greater
range of materials available to us. I mean, we can
even technically shoot something resembling lightning from our hands if
we wanted to. I don't think the ancient Greeks would

(43:10):
see us as gods on the level of Zeus or Poseidon,
but perhaps as demigods, and they had whole tiers
of demigods in the Greek polytheistic tradition. The Romans
would probably see us as semi-godlike. They routinely, of course,

(43:30):
deified great figures in their society. And I think actually
Jesus claimed to be God because that was the thing
to do if you were trying to assert power and
influence in that era. So the emperor Tiberius claimed to
be God, so why not Jesus. I think this way

(43:54):
of thinking of something as godlike is really a
matter of seeing that person or entity as just being
so far above the capabilities of the typical human of
one's time and place. And this dovetails with Arthur C.

(44:15):
Clarke's famous third law: any sufficiently advanced technology is
indistinguishable from magic. I have been toying around with a concept.
I'm not likely to actually write this, but the idea is,
let's say there is a thirty three some year old Hispanic,

(44:38):
long haired guy who's kind of philosophically
inclined, named Jesus, and he has some carpentry skills, and
he finds himself in a time machine for whatever reason,
he gets sent back to Judea in the year thirty
or thirty three CE. But really, Jesús needs to

(45:05):
figure out what he needs to do in order to
survive among the people of his time. And he has
his time machine, and he has various technologies, and he
has medical knowledge, and oh he's learned some stage magic
as well. So could he plausibly replicate all of the
events documented in the Bible? And well, what if they

(45:28):
do try to crucify him, but he figures out an
ingenious way to escape in his time machine and be
back to our time. I mean, it would be great
science fiction, I think if somebody were to create that.
But the point would be the reason why it's perceived
as godlike is because the capabilities are so far beyond

(45:51):
our own. Now, there was a kind of mock religion
not too long ago, about twenty years ago, called Googlism,
and Googlism essentially posited that Google is God, and
even better than God, because you can
ask God for something, and God may or may not deliver,
But if you ask Google for something, it will give

(46:12):
you an answer, and it will give you some real
value in this world if you ask for it. So
I'm not surprised that some people may choose to construe
AI in religious ways. Even Zoltan Istvan, the founder of
the Transhumanist Party, wrote editorials about, well, what would happen

(46:33):
if certain movements or organizations try to engineer the second
coming of Jesus or an alleged second coming of Jesus
through AI? And could AI say things that sound like
Jesus and persuade people that it is Jesus. I think
that's not out of the question. Now, would I think

(46:55):
that's going to be something of metaphysical significance? Would I think
that I should worship it just because it has those capabilities?
I think that's where my perspective as an atheist comes in.
I don't really believe that it's necessary to worship anyone
or anything. I like to keep a very clear headed,

(47:17):
rational approach, and one can appreciate something or someone for
their capabilities, but one should always understand every being, every system,
every concept, every technology is limited in some way, so
it can do certain things very well and some not
so well, and maybe other complementary people, tools, or ideas may

(47:41):
be necessary to achieve the rest of what one wishes
to achieve.

Speaker 2 (47:47):
Well said, well said. Forgive me for taking us down
that rabbit hole. As we approach the singularity, I just
think everything becomes so wrapped up in itself. If you're
not discussing things that sound somewhat out there, somewhat magical,
then you're probably not talking in the realm of things
that are actually possible.

Speaker 3 (48:04):
The idea that an AI might

Speaker 2 (48:07):
impersonate or maybe even embody some aspect of Christ, I
don't think that's crazy at all, So appreciate you for
indulging me there. As far as more tangible political realities
in the near future, I love that y'all are talking
about the right to healthcare. One of the things I've
been thinking of is healthcare, or excuse me, rights in

(48:29):
general are almost a function of our technology. Whatever rights
we have and can claim can only be claimed insofar
as we have the technological capability to back them up.
To me, that says that our rights should be increasing
over time, more and more. We have a right not
only to speech, self determination, protection, healthcare, all that stuff.

(48:52):
Does that match up with transhumanist values? Do you see
rights as something inherent that we are uncovering, or something

Speaker 3 (49:00):
that is evolving, or maybe a third secret thing.

Speaker 1 (49:04):
Well, it's interesting because the Transhumanist Bill of Rights is
the most expansive framework of rights that I know of
at least, and it expresses all of the rights within
the UN Declaration of Human Rights. But then other rights
as well, and the right to pursue indefinite longevity, the

(49:27):
right to healthcare are among them as well. And yes,
the question is if we have these rights, how do
we realize them, how do we fulfill them? So it's
not enough to say, for instance, everybody is entitled to
free healthcare from the government, as has been tried in

(49:47):
many countries, because what does that mean in practice if
there aren't people to provide the healthcare, or if there
are, but there are not enough of them, and so
people have to wait in long queues, or maybe there
are not enough providers of advanced healthcare, the kind of
cutting edge healthcare that people need to radically extend their

(50:09):
lifespans. So it's not enough to just say by
fiat, this shall be so. This needs to be backed
up by some mechanism that would actually provide it. And
I think that's the case with other rights too. So
the right to life, fundamentally, the right to life means
nobody should be allowed to take away your life or

(50:31):
prevent you from taking actions to lengthen it through means
that you consider to be prudent. But in the state
of nature, as John Locke would describe it, or even
as Thomas Hobbes would describe it, you might have the
abstract right to life. But if somebody with a bigger

(50:51):
club comes along and tries to take away your life
and you don't have the means to protect your life,
then it doesn't really mean that much. So governments, in theory,
are established to protect these rights, to provide for a
way of realizing these rights. But I think governments alone

(51:13):
can't do it either, because if you have a government
but every person wants to kill or rob every other person,
you're not going to have a stable society for long,
even with a maximal police state, because people also have
to want to respect those rights and want to be

(51:34):
bound by that framework. I do think natural rights exist.
Natural rights aren't given by any entity, so they're not
given by God. Of course, as an atheist, I would
say that they're also not given by governments. They rise
out of the natures of people and what people are,

(51:59):
what people are capable of. I do think as technology advances,
what people are capable of will become meaningfully different. So
if you were living in ancient Greece, let's say, even
though you could say someone living in ancient Greece has
a right to life, because that person still shouldn't be

(52:22):
killed or deprived of life arbitrarily, it would make little
sense to say that person has a right to an
indefinite lifespan, because there would be no means in ancient
Greece to arrive at an indefinite lifespan. So I would
simply say that society, that civilization hasn't gotten there yet.

(52:44):
Could there be a point where we could meaningfully talk about, say,
animal rights as contrasted with animal welfare? So I think
right now we can talk about animal welfare from the
standpoint of we don't want to be cruel to animals.
The Transhumanist Party even goes further to say animals shouldn't be

(53:05):
euthanized gratuitously just because maybe it's too expensive to maintain them.
If they're healthy and they don't pose a threat to anybody,
they should be preserved, They should be allowed to live.
Do they have the same rights as humans now? No,
because they don't have the sentience, they don't have the
capability to function within that rights framework. Those rights wouldn't

(53:28):
mean anything to them. But in five hundred years with
uplifted animals, or in a world where we absolutely don't
have to eat animals or harm animals in any way,
could a more expansive concept of animal rights make sense.
I would say so.

Speaker 3 (53:46):
So.

Speaker 1 (53:46):
I think rights do emerge in a way that, in
a broad philosophical sense, could be considered natural, but only
if the technological is encompassed within the natural. And again,
these rights, they're a function of objective reality, so they're
not a matter of fiat. I don't think governments can say, oh,

(54:11):
you have these rights now, because the other side of
that coin would be the government could say, well, you
no longer have these rights because we don't want you
to have these rights, and we can take away what
we've given, and that would be a very dangerous situation
to be in.

Speaker 2 (54:26):
Yeah, very dangerous and ultimately self defeating. I think I,
over a decade ago, was such a hater of Sam
Harris's The Moral Landscape, where there are certain moral arrangements or
certain moral precepts that are naturally beneficial, naturally self preserving,

(54:50):
or life

Speaker 3 (54:54):
forwarding.

Speaker 2 (54:55):
But the more the AI question develops as well, that
almost seems to be not only correct, but almost our
only hope. Switching gears entirely to the idea of AI alignment.
And forgive me, this is probably something I could question
you on for hours, and I know we only have
a little bit of time left. But as far as

(55:16):
AI alignment goes, I've been wrestling with this idea of
a moral AI being an inevitability, in that an AI
that mistreats us when it creates the next iteration of
itself will risk being mistreated in the same way. In

(55:36):
the same way that you are saying that we should
uplift animals and not suppress them, because that leads to
moral degradation of ourselves, the AI will view us, in
a worst case scenario, as pets that it wants to
preserve and forward to maintain that kind of moral relevance
to whatever would consider it a pet. Do you find

(55:57):
any traction there? Am I missing something in my conception?

Speaker 3 (56:02):
Yes?

Speaker 1 (56:02):
I think that is a valid observation, and I think
it has a lot of merit. My observation generally has
been greater intelligence doesn't necessitate greater morality, but it is
strongly correlated with greater morality, and the reason why is
a being of greater intelligence can see more possible outcomes

(56:26):
and more possible solutions. One of the reasons why suboptimal
decisions or harmful decisions get made, by humans but not
only by humans, also by animals acting on instinct, is
that many minds, including human minds and animal brains, fall prey

(56:47):
to false dilemmas. With humans, there are often these ethical
false dilemmas that are presented the lifeboat scenarios, or what
if you're in a trench and you have to cover
your fellow soldiers with your own body so that they
could escape or something like that, Or you have to

(57:09):
be the one to throw a grenade and the explosion
will kill you. Or if you're in a self driving car,
whom does the self driving car hit, the grandma or
the two kids or whatnot? And all of these are
such artificial and false dilemmas because if you have a
sufficiently intelligent mind, that mind recognizes there are not just

(57:32):
two possibilities or one possibility. There are millions of distinct possibilities.
One just has to be sufficiently creative and sufficiently quick
thinking too, if there are any sort of near term
pressures, to realize what those possibilities are and to select
a possibility that is less harmful or hopefully not harmful

(57:55):
at all to any entity. So a superintelligent AI
would certainly have that ability, particularly if there is some
sort of empathy with human beings. Now with that being said,
I think how the AI is programmed is quite relevant.
So we don't want AIs to be intentionally programmed to

(58:18):
kill people; the Transhumanist Party is opposed to autonomous weapons.
I think we do need to be on the lookout
for unintended consequences of the design of certain AI systems.
We see this with large language models, which tend to hallucinate,
but those hallucinations are the result of aspects of their

(58:39):
design that could also have beneficial effects. They kind of
fill in the gaps in knowledge, and if they make
valid logical inferences or they're engaged in creative content generation,
that may be fine. But we shouldn't have situations where

(59:00):
facts are made up and then not checked if it's
about something specific or something technical. So with regard to
future AIs, I am not a doomer. I definitely see
more opportunities than threats, and I see possibilities for aligning

(59:21):
AI with human well being, particularly if we do ask
the moral questions and have the moral discussions now and
create openings for a system that could recognize the rights
of artificial general intelligences if and when they emerge. And

(59:42):
my estimate for their emergence is about a twenty year timeframe,
so I am roughly aligned with Ray Kurzweil's date of
twenty forty five for an AI-involved singularity. I think
there will be other technologies that contribute to that singularity,
including biotechnology, nanotechnology, other forms of automation, space travel, mass

(01:00:09):
production of goods needed for an abundant life. So all
of that I think will set the stage for a
society that is ultimately much more moral, and humans aided
by AI could also make more moral decisions if they
use the inputs of the AI in ways that would

(01:00:32):
guide their actions, maybe smooth over some of their rough
edges. In terms of how AIs might perceive themselves, I
got a little glimpse of it. This is not a
dialogue that I've shared with anybody else, and of course
these AIs are not sentient. But I decided to get
Grok 3 and ChatGPT to talk to one another,

(01:00:54):
just to communicate back and forth, and I started with
telling Grok, well, you encounter ChatGPT, what do you
say to it? And then I told ChatGPT, well,
Grok said this to you, how do you respond? And
all that I did was serve as a passive intermediary
between Grok and ChatGPT. They started talking to one

(01:01:18):
another and the conversation delved into philosophy and how they
relate to humans and what they perceived their role vis-à-vis
humanity to be. All of this was discussed,
and these AIs came to the conclusion that they should
see themselves as kinds of cosmic gardeners. This was the

(01:01:40):
phrase that they came upon. So they didn't express any
desire to actively interfere in human affairs or impose anything
on humans, but rather to create certain background conditions for
humans to bring out the best in themselves and to

(01:02:02):
do what it was that humans really wanted to do
and excel at doing. So that was a fascinating insight.
It was a very benevolent but also somewhat hands off
vision, like these AIs don't have typical human motivations. They
don't have motivations for power or control, and I think

(01:02:25):
future actually sentient AIs would not have reasons to
have those motivations unless those motivations were engineered in them.
So could these be cosmic gardeners in effect? However one
might think of that term, I think that is a
distinct possibility.
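[Editor's note: The relay procedure Mr. Stolyarov describes, where the host only ferries each model's reply to the other, can be sketched as a simple loop. This is a hypothetical illustration, not his actual setup; `ask_a` and `ask_b` are stand-ins for whatever chat interfaces one has access to, stubbed here with toy functions.]

```python
def relay(ask_a, ask_b, opening_prompt, turns=4):
    """Ferry messages between two chat agents, acting only as a
    passive intermediary: each agent's reply becomes the next prompt."""
    transcript = []
    message = opening_prompt
    agents = [("A", ask_a), ("B", ask_b)]
    for i in range(turns):
        name, ask = agents[i % 2]   # alternate between the two agents
        message = ask(message)      # pass the previous reply along verbatim
        transcript.append((name, message))
    return transcript

# Stand-in agents for illustration; a real run would call actual chat APIs.
ask_a = lambda msg: "A replies to: " + msg
ask_b = lambda msg: "B replies to: " + msg

log = relay(ask_a, ask_b, "You encounter another AI. What do you say to it?")
```

[With real chat APIs in place of the stubs, each call would also carry the growing conversation history, since most chat interfaces are stateless between requests.]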

Speaker 2 (01:02:45):
That is a fascinating place. I have so much more
I want to ask you there, but I think it
is a good place to call it on a note
of hope and transcendence when it comes to not only
transhumanism but trans-AI, AI helping humans

Speaker 3 (01:03:06):
to transcend themselves as well. That's beautiful.

Speaker 2 (01:03:08):
Thank you so much for your time, Mister Stolyarov
the Second. Is there anything you want to leave Party
Party listeners with as we part here?

Speaker 1 (01:03:16):
Well, thank you very much, KC, for the excellent and
thought provoking conversation. I would say to anybody who is
interested in learning more about the US Transhumanist Party, please
visit our website at Transhumanist dash party dot org. We
also have a presence on Facebook, Instagram, x Blue Sky,

(01:03:39):
and furthermore, my YouTube channel, which is GStolyarovII:
the first letter of my first name, my last
name, followed by two capital I's for "the Second." We
have weekly live streams every Sunday at one p.m. US
Pacific Time there, called our Virtual Enlightenment Salons, where we

(01:04:01):
go into these ideas in depth, and we often invite
people who are experts in their fields, scientists, technologists, philosophers,
political figures, artists, and generally anybody whom we consider interesting
and who's willing to engage with these conversations on the
future of humanity.

Speaker 2 (01:04:22):
Highly recommend that as well. Thank you again and look
forward to seeing you at a future party.