Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Greetings and welcome to the United States Transhumanist Party Virtual
Enlightenment Salon. My name is Gennady Stolyarov II, and I am the Chairman of the US Transhumanist Party.
Here we hold conversations with some of the world's leading
thinkers in longevity, science, technology, philosophy, and politics. Like the
(00:21):
philosophers of the Age of Enlightenment, we aim to connect
every field of human endeavor and arrive at new insights
to achieve longer lives, greater rationality, and the progress of
our civilization.
Speaker 2 (00:37):
Greetings, ladies and gentlemen, and welcome to our US Transhumanist
Party Virtual Enlightenment Salon. Today is Sunday, September twenty one,
twenty twenty five, and we have a fascinating conversation in
store for you on euro transhumanism and what it can
teach us about the philosophy of transhumanism in general, as
(00:58):
well as some applications to American politics, because here in
the United States we are experiencing one of the most
turbulent political periods in our history, certainly the most turbulent
political period in my lifetime thus far, and we could
benefit from any insights that we can get about how
(01:21):
to improve this situation. So joining us today is our
panel of distinguished US Transhumanist Party officers, advisors and members,
including our Director of Visual Art, Art Ramon Garcia, our technology advisor and Foreign Ambassador in Spain, Doctor José Cordeiro,
and our friend David Wood, who is the founder of
(01:43):
the London Futurists and the executive director of LEVF, the Longevity Escape Velocity Foundation. Our special guest today is Doctor Stefan Lorenz Sorgner. He is a world-leading philosopher recognized
for his work in posthumanism, transhumanism, Nietzsche, and the philosophy
(02:04):
of music. He is especially known for his interpretations of
Friedrich Nietzsche and his critical engagement with human enhancement and
emerging technologies. He is a philosophy professor at John Cabot
University in Rome, director and co-founder of the Beyond
Humanism Network, and a fellow and advisor of numerous organizations,
(02:24):
as well as editor-in-chief and founding editor of the Journal of Posthuman Studies, and a prolific public speaker.
Essentially every few days I see a new announcement about
where Stefan is speaking, and he travels all over the world,
delivering fascinating presentations of which this will hopefully be one.
(02:47):
He is an author as well. He has published numerous
books on philosophy, including Metaphysics Without Truth in two thousand
and seven, On Transhumanism in twenty twenty, We Have Always Been Cyborgs and Philosophy of Posthuman Art, both of
those in twenty twenty two, and here is a link
(03:11):
to We Have Always Been Cyborgs. You can also find
a link to pre-order his forthcoming book, Eurotranshumanism, which
will be published in May of twenty twenty six. Here's
the link to that. You can also learn more about
him on his website at www dot sorgner dot de,
(03:33):
and you can read about him on H+Pedia, the Transhumanist Encyclopedia.
Perhaps we will update this entry after the salon to
embed the recording there as well. So Stefan, welcome to
our Virtual Enlightenment Salon, and please tell us your vision
of what Eurotranshumanism is, how it differs from what you
(03:58):
call classical transhumanism, and what lessons we can derive from
it in regard to the turbulent situation we face today.
Speaker 3 (04:09):
Many thanks for that extremely kind introduction. It's an enormous pleasure and honor being here with you and with this
extremely distinguished group of scholars, intellectuals, experts who are on
the panel with me. Yeah, David Wood, I've met him already. I think I spoke at the London Futurists for
(04:30):
the first time. It must have been fifteen years ago, so we really go back a long period of time. So yeah, eurotranshumanism basically started with the publication of my article Nietzsche, the Overhuman, and Transhumanism, which came out in two thousand and nine, and today I will present basically eurotranshumanism in a nutshell,
(04:54):
also distinguishing and orienting, basically putting forward a map of transhumanism and placing eurotranshumanism as part of that map. So eurotranshumanism represents a glocal variation of transhumanism rooted in the continental European cultural tradition. Transhumanism is a cultural movement that affirms using technologies to significantly move beyond our present limitations,
(05:19):
whereby the likelihood of the coming about of the posthuman gets increased to promote the quality of our lives. I think this is, in a nutshell, my understanding of transhumanism. And eurotranshumanism draws on philosophical and scientific local paradigms dominant in this cultural tradition. The term was coined in twenty twenty-two. It represents an alternative to the
(05:43):
other variations of transhumanism; in particular, I will distinguish it and relate its differences to classic transhumanism, the most dominant form of transhumanism, which emerged from the Anglo-American cultural tradition and whose most famous living proponents include Nick Bostrom, Elon Musk, and Peter Thiel.
(06:04):
Then there's a different version, Max More, Natasha Vita-More; this is original transhumanism. I will briefly mention them as well, but because of the political impact of classic transhumanists, I will particularly focus on the distinction and distinguish eurotranshumanism from classic transhumanism. Transhumanism is the most paradigm-shifting idea in
(06:27):
the world because it undermines the dominant world view of the past twenty-five hundred years. It takes evolution seriously, is aware that human beings came about three hundred thousand years ago due to a genetic mutation, and is conscious that in three hundred thousand years we will have died out or developed further. Hence, it stresses the need to take responsibility for our lives and give form to our
(06:49):
existences by means of emerging technologies. This is the fundamental self-understanding of transhumanism. We need to significantly move beyond our current boundaries such that the probability of the coming about of the posthuman increases, and thereby we can increase the quality of our lives, whereby a particular focus is on increasing the healthspan, which is
(07:11):
what is generally shared among transhumanists. History: the word transhumanism can be traced to the Divine Comedy by Dante, as he coined the term transumanar; Natasha Vita-More did a great job finding that out. However, it referred to a development in the afterworld then, and the first intellectual to connect the term transhumanism to an immanent development
(07:33):
was Julian Huxley in nineteen fifty-one, not nineteen fifty-seven. Already in nineteen fifty-one he coined the term and published it in an article, and thereby created the foundational version of transhumanism. He also employed the term evolutionary humanism. Julian Huxley was the first Director-General of UNESCO, was involved in structuring the Universal Declaration of Human Rights,
(07:55):
and was also president of the British Eugenics Society. His family was deeply entangled with the development of the theory of evolution. His brother Aldous wrote the famous anti-transhumanist novel Brave New World. His grandfather was Darwin's bulldog, Thomas Henry Huxley, and his half-brother, Andrew Fielding Huxley, was a Nobel Prize winner. There are several varieties of transhumanism. Huxley developed the intellectual
(08:20):
foundations of transhumanism. It slowly became a central cultural movement from the beginning of the seventies. However, there has never been just one transhumanism. Different streams need to be distinguished, whereby the following three approaches are of particular relevance. Firstly, there's original transhumanism: in the eighties and nineties, FM-2030,
(08:41):
Max More, and Natasha Vita-More turned transhumanism into a broader cultural movement. The texts Are You a Transhuman? Monitoring and Stimulating Your Personal Rate of Growth in a Rapidly Changing World by FM-2030 from nineteen eighty-nine, and Transhumanism: Toward a Futurist Philosophy by Max More from nineteen ninety, are
(09:02):
foundational sources for contemporary transhumanism. Natasha Vita-More, who already wrote the Transhumanist Manifesto in nineteen eighty-three, has been the central female voice promoting transhumanism for more than four decades. Max More is the leading thinker of original transhumanism, and he is critical of perfectionism, utilitarianism, and utopianism from a virtue-ethical approach and
(09:24):
politically supports libertarianism. Secondly, there is classic transhumanism. In nineteen ninety-eight, Nick Bostrom and David Pearce founded the World Transhumanist Association. From two thousand and five until twenty twenty-four, Bostrom was the director of the Future of Humanity Institute at the University of Oxford, which was largely responsible for
(09:45):
developing classic transhumanism, the philosophical approach which defends utilitarianism, perfectionism, and the relevance of utopias. These ideas have had a strong impact on Silicon Valley entrepreneurs like Elon Musk, who donated a million British pounds to the FHI in twenty fifteen, contributed the blurb to Bostrom's book Superintelligence, and
(10:06):
popularized many of Bostrom's ideas, like the simulation hypothesis. Thirdly, there's eurotranshumanism. It crystallized at the latest with the publication of my article Nietzsche, the Overhuman, and Transhumanism in the Journal of Evolution and Technology in two thousand nine, which unfolded an enormous number of reactions and connected transhumanism with reflections from continental philosophical traditions. It draws on the
(10:29):
philosophical reflections of Heraclitus, Spinoza, Goethe, Wagner, and Nietzsche. It's anti-libertarian, anti-utilitarian, anti-perfectionist, and anti-utopian, and highlights the relevance of incorporating emerging technologies so that a great diversity of different lifestyles can be realized. The reception of transhumanism in the twenty-first century: there are four different
(10:52):
stages in the popular reception of transhumanism which can be distinguished. In particular, in two thousand and one, Jürgen Habermas, probably one of the most important living philosophers, derided the movement for its freaked-out intellectuals and self-styled Nietzscheans, with all too familiar motives of
(11:14):
the very German ideology. Thus the intellectual relevance was highlighted. Secondly, in two thousand and four, Francis Fukuyama referred to transhumanism as the most dangerous idea in the world in Foreign Policy magazine, whereby a wider intellectual engagement with transhumanism was realized. Then thirdly, in twenty thirteen, Dan Brown published
(11:36):
his international bestselling novel Inferno, with which a public reception of transhumanism was initiated. Fourthly, in July twenty twenty-four, Elon Musk created the America PAC political action committee to support the political campaign of Donald Trump. The organization was backed by many other billionaires, venture capitalists, entrepreneurs,
(11:59):
and people from companies like Palantir. The politics of transhumanism today: the final stage reveals the political relevance of transhumanism today, which is extremely complex. There are several antagonistic attitudes towards transhumanism solely within the US American government. Firstly, there's a strong
(12:20):
pro-transhumanism strain within the Trump administration. The co-founder of Palantir, Peter Thiel, was a major donor to Vance's twenty twenty-two Senate campaign in Ohio, and Vance also worked for Thiel's venture capital firm Mithril Capital. In twenty twenty-one, Thiel was also responsible for realizing a meeting between Vance and Trump, which led to Vance becoming Trump's running mate.
(12:43):
Thiel and Musk had led separate companies which later merged into PayPal; both embrace versions of transhumanism. While Musk is particularly concerned with existential risk, Thiel is particularly interested in increasing our healthspans, which is the reason why he has donated significant sums to the SENS Research Foundation, an outgrowth of the Methuselah Foundation. Jim O'Neill,
(13:07):
while running the Thiel Foundation, co-founded the Thiel Fellowship, which has already had an enormous economic impact as it supported Austin Russell, Vitalik Buterin, and Lucy Guo, among many other distinguished recipients. Jim O'Neill also served as CEO of the SENS Research Foundation from twenty nineteen until twenty twenty-one,
(13:30):
and he was sworn in by Kennedy on the ninth of June twenty twenty-five as the United States Deputy Secretary of Health and Human Services in the Trump administration. Secondly, there's a strong anti-transhumanism strain in the Trump administration too. Former Trump advisor Steve Bannon wrote the preface to the
(13:52):
anti-transhumanist book Dark Aeon, published in twenty twenty-three, and recently highlighted that, quote, Elon Musk is an evil person, unquote. In the preface, the former White House chief strategist agreed with the substance of Fukuyama's criticism but goes much further to stress that, quote, the practice is far worse, unquote, than the idea of transhumanism.
(14:15):
According to him, transhumanist developments are in the interest of the world's quote elite overlords unquote, and the quote global institutions of finance, Wall Street, and Davos unquote are behind this latest aberration of humanity. Thus, Bannon's criticism also contains antisemitic connotations. Bannon is still a significant voice within Trump's
(14:37):
MAGA movement and a key informal advisor of Trump's inner circle. Thirdly, there's a global anti-transhumanist coalition which connects the USA with Russia and Brazil. In Russia, the leading protagonists are Alexander Dugin and the patriarch of the Russian Orthodox Church, Kirill I, who have participated in anti-
(14:59):
transhumanist events. Dugin explicitly calls transhumanism an idea of the devil. According to him, pluralism, the LGBTQIA+ movement, and all the other evils of the world are supposed to be manifestations of transhumanism, which he portrays as
(15:21):
an antagonist of Orthodox truth. Bannon and Dugin met personally in Rome in twenty eighteen to discuss philosophical issues related to traditionalism, an approach which rejects modernity and its ideals. This outlook is also shared by Olavo de Carvalho, the political guru of the former president of Brazil, Jair Bolsonaro.
(15:46):
So we see the connection concerning traditionalism between Russia, the United States, and Brazil. Fourthly, it's interesting to also note that, quote, Maria Vorontsova, the eldest daughter of Russian leader Putin, is now publicly positioned as a major scientist with a world-class laboratory after winning a state-funded grant
(16:07):
from the Russian Science Foundation. The grant supports her latest project, regulation of cell renewal processes in the body, the fundamental basis for long-term preservation of functional activity of organs and tissues, human health, and active longevity. In effect, its focus is anti-aging. Prolonging the healthspan is a central
(16:28):
goal which is being upheld in transhumanism too. The relevance of longevity is highlighted by both Putin as well as Xi Jinping: in September twenty twenty-five, quote, the Russian leader was caught musing about immortality with Xi Jinping, but his fascination with long life is nothing new, unquote, The Guardian
(16:48):
stressed. The future of transhumanism: crypto and anti-aging research. The most flourishing area of transhumanism at the moment is anti-aging research; undoing aging and increasing the healthspan is a central goal within transhumanism. The global player in this context is Vitalik Buterin, who founded the second biggest cryptocurrency, Ethereum,
(17:10):
launched in twenty fifteen. I actually met him; we had a long discussion about that a week before he actually launched Ethereum. He received a Thiel Fellowship in twenty fourteen, which enabled him to do so, and he launched the pop-up city of Zuzalu in Montenegro in twenty twenty-three, which has played a significant role in the development of Vitalia, a decentralized city focused on longevity and
(17:33):
biotech innovation located in the Próspera Special Economic Zone on Roatán, Honduras, where several longevity events have been hosted. The leading gerontologist Aubrey de Grey, who has analyzed that aging is a disease, has been involved in events in Roatán in twenty twenty-four. In August twenty twenty-one, de Grey was removed from his position as Chief Science Officer of the SENS Research Foundation
(17:56):
following allegations of sexual harassment. He now also counts as an influential facilitator within the local innovation ecosystems of Chiang Mai in Thailand, a further hub for longevity research. Workshops on topics including defensive acceleration, decentralization, stablecoins, AI, and biotechnology are being organized in interactive pop-up communities
(18:19):
like Edge City Lanna, which took place in Chiang Mai in the fall of twenty twenty-four, and there's a global trend of crypto communities to support anti-aging research. Now to eurotranshumanism. The roots of eurotranshumanism can be traced to the two thousand nine publication of my article Nietzsche, the Overhuman, and Transhumanism. Core elements of eurotranshumanism
(18:40):
were further developed in my monographs On Transhumanism from twenty twenty, We Have Always Been Cyborgs from twenty twenty-two, and Philosophy of Posthuman Art from twenty twenty-two. Classic transhumanism, as the culturally, economically, and politically most dominant form of transhumanism, belongs to the Anglo-American culture and draws upon the dominant philosophical and scientific paradigms operative in this tradition.
(19:05):
Eurotranshumanism, on the other hand, is rooted in the continental European cultural tradition, while also including central elements of transhumanism common to classic transhumanism. What's the difference between classic transhumanism and eurotranshumanism? Classic transhumanism and eurotranshumanism differ significantly in
(19:25):
their approach to central philosophical issues. Firstly, while classic transhumanism upholds several versions of utopias, like longtermism, eurotranshumanism rejects the affirmation of any utopia due to its paternalistic and potentially totalitarian implications. It still upholds sort of the relevance of visions based on scientific insights, but not like
(19:49):
a perfect future state, a utopia, which justifies any kind of action. Secondly, while classic transhumanism affirms the Renaissance ideal of perfection, and Bostrom explicitly states that in an article, eurotranshumanism embraces a radical plurality of concepts of the good life, whereby the right of morphological
(20:10):
freedom becomes central. Thirdly, while classic transhumanism presents an optimistic understanding of the world, that things can be solved, eurotranshumanism adheres to a more Buddhist understanding of dukkha, due to us having desires and longings, which informs philosophical pessimism, such that, you know, suffering is a
(20:33):
challenge and it's not a realistic option to completely get rid of suffering. We need to sort of deal with it. But that's sort of a conditio humana. Fourthly, while classic transhumanism claims that certain philosophical insights, as well as specific values and norms, are objectively and universally valid, in the Enlightenment tradition,
(20:59):
usually utilitarian insights, eurotranshumanism regards philosophical perspectivism as the most plausible way to interpret philosophical and ethical judgments. That's based on a hermeneutic understanding, so it's really rooted in the tradition of Vattimo, Gadamer, and Heidegger. Fifthly, while classic transhumanism
(21:21):
affirms a version of ethical utilitarianism, eurotranshumanism favors an ethics that is more considerate of cultural contexts. In the end, the norms and values, they are fictions, human-made fictions, and therefore they can be altered easily. They are still rooted in
(21:42):
a certain geographical cultural location. And that's why I also stress the notion of eurotranshumanism. Eurotranshumanism does not affirm eurocentrism, that something from the European context is globally valid. I'm merely revealing my own cultural context.
(22:03):
I'm highlighting sort of my situatedness in the discourse, and thereby I'm entering the discourse with other approaches, not taking the stance that mine has a foundational supremacy, that it has an ultimate foundation and ultimate validity. Sixthly,
(22:26):
classic transhumanism argues for a libertarian political system, while eurotranshumanism regards social liberal democracies as the best method of promoting human flourishing. And this aspect is particularly relevant when it comes to a universal health care system. And that's what I see as one of the biggest, you know, one central challenge, in particular
(22:48):
for the United States, in particular with regard to the relevance of increasing the human healthspan. You know, thirty million people are uninsured. The average life expectancy is a few years below the life expectancy of Europeans living in the European Union countries. And despite the fact that the United States
(23:09):
definitely has the most advanced medical system, people are still dying significantly earlier. Seventhly, classic transhumanism aims for the realization of immortality, while eurotranshumanism identifies an increase of the healthspan as the best way to promote the quality of human lives. I don't want to use the
(23:30):
immortality discourse. It has too many religious implications, and in a literal sense it cannot be used in a meaningful manner. Eighth point: while classic transhumanism claims that mind uploading will be possible in a few decades, focusing on a silicon-based transhumanism, eurotranshumanism stresses that we've always been cyborgs, which, due to us being
(23:52):
psychophysiological entities, renders it highly unlikely that the singularity is near. It's more representative of a carbon-based transhumanism. I'm not excluding the possibility that mind uploading will eventually be possible, but, you know, not in the forthcoming decades, not in the near future. Ninth point: while classic transhumanism
(24:15):
stresses the risks associated with superintelligence, eurotranshumanism regards it as highly unlikely that an artificial intelligence will put humans into a zoo in the near future. I would even go further and say concepts like AGI and superintelligence, they really should be sorted, you know, for the time being,
(24:37):
into the science fiction section. You really are confronted with weak versions of artificial intelligence, and despite the exponential progress, I don't see that changing in the near future. Tenth point: to overcome suffering should be our primary goal according to classic transhumanism; eurotranshumanism recognizes the
(25:00):
reality of suffering and hence stresses that we should focus on realistic goals by trying to successfully deal with some of our most central challenges, by realizing that this is as good as it gets. So whatever the solution, I'm very pragmatic. I'm trying, you know, to aim for something which is as good as it gets, even if some of the goals
(25:21):
are really challenging, but that's, you know, the best we can aim for. Eleventh point: classic transhumanism sees transhumanism in line with the Enlightenment. Eurotranshumanism, on the other hand, stresses a more ambiguous relationship to the philosophical foundations of the Enlightenment, and one of the reasons for that is actually
(25:43):
because the dominant Enlightenment tradition, you know, can be traced to Descartes, and they upheld a dualistic understanding of humanity, with the body being material and the mind being immaterial, and that is a highly problematic notion which is being doubted by eurotranshumanist approaches. And the twelfth point is, well, eurotranshumanism
(26:06):
sees Nietzsche as an ancestor of transhumanism. Classic transhumanism denies his relevance, and we can see that in a lot of, you know, Bostrom's history of transhumanism gives a clear example of that approach. So this is the summary of the twelve important points where I sort of really differ and highlight the differences between the eurotranshumanist approach
(26:29):
and the classic transhumanist approach. Thereby I did not actually deal with the foundational transhumanists or the original transhumanist approach, and particularly with the original transhumanist approach by, you know, what Max More and Natasha Vita-More have been suggesting. Actually there are many correlations and many
(26:50):
strong resonances between us. And you know, Max More was strongly influenced by Nietzsche, as was I. So yes, so far this was, like, in a nutshell, what eurotranshumanism is all about. And now I'm looking forward to discussing
these issues with you further.
Speaker 2 (27:07):
Yes, well, thank you very much Stefan for that overview.
I think several of our viewers remarked that this was
kind of a rapid fire introduction to eurotranshumanism, and there
is so much to discuss there. And I will first
ask a question from Mike Lesine, because this is more
(27:32):
of a question that I think it would be good
to answer for the understanding of our audience, because he
uses a kind of colloquial meaning of the term metaphysics
that in his view is involved with the supernatural. But
that's not really what we're talking about when we discuss metaphysics.
(27:53):
Metaphysics is the study of, essentially, the nature of existence. So ontology is a category of metaphysics; really it's a philosophical discipline that asks what exists at the most fundamental level. But how would you respond to this question
(28:14):
from Mike Lesine?
Speaker 3 (28:17):
I don't think I used the term metaphysics. Maybe it was only mentioned when you read sort of the title of my first book, Metaphysics Without Truth. Otherwise, you know, I wouldn't use the term metaphysics either. But you know, within the debate there are different uses of the term metaphysics, and one originally went
(28:40):
back to Aristotle, and basically the origin was simply that metaphysics contained the reflections about the fundamental reality of the world. And the reason was just that the book was placed in the library behind the book which was entitled Physics. Then eventually it turned into the study of the, you know, fundamental
(29:02):
structures of reality. And then in certain traditions metaphysics did take on the understanding that it was an affirmation of a dualistic ontology, like that there's something material and that there is something immaterial. And I'm very critical of such an understanding. In another understanding, some might be
(29:23):
using just metaphysics as a term to refer to ontologies in general. Ontologies could be a materialist ontology, could be a naturalist ontology. So that is another use of the term metaphysics. So metaphysics really has an enormous diversity of different notions. And I'm very grateful that you sort of mentioned and picked up upon
(29:45):
the term metaphysics. I agree with you: the supernatural and all of this is something I'm also extremely critical of and doubtful of. And this is, you know, one of the central movements, away from such a dualistic, supernatural understanding of the world. And yeah, thanks a
(30:06):
lot for allowing me to highlight this insight.
Speaker 2 (30:12):
Yes, thank you very much for your answer, Stefan. And
now we have David Wood who would like to offer
a few points in response to your overview, So David
please proceed.
Speaker 4 (30:25):
Well, many thanks. And I fully agree that there is a rich diversity within this transhumanist family, and that's important. We should not all be pigeonholed into a single description. We are stronger because we have people who are, for example, some more pro-market, some more pro-social-contract; some are more open to collaboration with religious organizations.
(30:50):
Some are very suspicious and fearful of collaboration with religions. Some are hostile to any regulations, some see the value of regulations, and so on it goes. So I don't actually think that there is a threefold split as you have described it. There may be many more splits than that. We shouldn't try and pigeonhole people even
(31:11):
into three categories. If I look at the ones you listed as paragons of classical transhumanism, Nick Bostrom, David Pearce, Peter Thiel, Elon Musk, maybe Jim O'Neill, they have great variety amongst them. Elon Musk isn't interested in life extension, so he says. David Pearce is very interested in paradise engineering,
(31:33):
which would get rid of suffering throughout the animal kingdom. That isn't stressed so much by the other people you have listed. Nick Bostrom, I think, defies any simple characterization. He's by no means a utilitarian. I think he probably comes close to virtue ethics as well. So I would say, rather than just trying to have an argument about which category people fall into, is it classical or is
(31:55):
it European or is it original, I think we should have twelve separate discussions on the twelve separate points, because on some of them I'm going to agree with you, and on some of them I'm going to strongly disagree with you. I strongly disagree with what you said about there being a choice between immortality and lifestyle. Well, there's much more to
it than that there is the possibility of radical life extension,
(32:17):
which goes far beyond the simple changes in lifestyle. It
is a radical change in our biology so that we
can live much longer, like the vision of UB degree.
I also strongly disagree with your assessment of AI and AGI.
I don't think AGI is science fiction. I see it
as just within our grasp. And AI has improved astonishingly
(32:38):
in the last few years, and has time and again taken even the leading designers in that field by surprise. People have been wondering when AI is going to be good enough to solve the mathematical Olympiad questions at the gold level. And people were predicting, well, it's at least ten years away, or it's at least eight years away, and one year ago they said it's at least two years away. And
(32:59):
now this year two different groups have solved them at that gold level for brand-new math. So there is a great deal of progress. Now, I'm just offering this and saying we should probably have lots of conversations rather than just trying to say, oh, Gennady, are you a euro transhumanist or an original transhumanist or a classic one. I think you're a Gennady transhumanist, and likewise, I don't want to
(33:22):
accept any of the labels.
Speaker 3 (33:27):
Yeah, thanks a lot. So you know, there's much more; actually, we probably agree on many of the things you just mentioned as well, because, of course, I do realize there are many significant differences, you know, between David Pearce, Nick Bostrom, and so on, whom I label as one specific group. However, this
(33:50):
is sort of one of the reasons why I thought it was really important to highlight certain categories, in particular not only within the academic discourse, but also in the public
(34:12):
discourse. Oh, I'm frozen out. No, I'm fine. So transhumanism is very often portrayed as a monolith. Basically, all transhumanists want to become immortal by uploading their personalities, and on the way they want to become Superman on Viagra or Wonder Woman on Botox. And that's, you know, that's like the level of
(34:34):
intellectual engagement with transhumanism within the public media. And this is such a caricature, which really has, you know, nothing much to do with what most transhumanists suggest. So that's why I'm highlighting: well, actually, many of the points which are being highlighted concerning transhumanism in the public media reception, they do correspond
(35:00):
to what I call classic transhumanism. And the influence of Nick Bostrom, in particular on Elon Musk and, sort of, in Silicon Valley, was extremely strong. And that's why I'm sort of stressing now many of the points; he in particular holds up many philosophical positions I strongly disagree with. On the other hand, when I
(35:21):
see some of the movements, and also, he would clearly classify as a utilitarian. And on the other hand, if I see people like, you know, Max More and Natasha Vita-More, there are many more things I have in common with them. On the other hand, sort of my grounding is sort of the
crowding and quite a few people also in Europe is
very different by resonating more with European continental tradition. On
(35:44):
the other hand, there also points I disagree. You know,
Max Moore is clearly more libertarian than I am when
it comes to when it comes to something too, is
your particularly agree you know there, of course we always
need to focus on the individual thinker, but it actually
is quite helpful also for the communication purposes to stress, well,
(36:05):
what you describe in the media as transhumanism, that might apply to a specific strain, and there are many other movements which actually completely disagree with that understanding. Two of the issues you pointed out: one was on immortality and the other was on artificial intelligence. Yeah, concerning immortality, really, immortality in the literal sense
(36:29):
usually means two things: that it's impossible to die, or that you do not have to die. But not having to die, meaning, actually, you know, that's a long time, and that also needs to take into consideration that eventually, given that it's a naturalistic,
(36:49):
dynamic world, then, you know, it had a beginning. Now we're moving, we're expanding. Eventually the universe will either come to a standstill or it will contract, and it will lead to a cosmological singularity. Immortality means we would have to survive all of that. And I think this
(37:10):
is not realistic. No one can seriously uphold such a position. I mean, this is why I said no to immortality in the literal sense; don't use that word. And this is when, in academia, or not only in academia, the response is, oh, it's just another religion. And I want to say, no, it has nothing to do with religion. There are no prayers, there are no rituals, there are no chants. And
(37:32):
so that's one of the reasons I want to clearly disentangle transhumanism from that kind of accusation. But I didn't say it's just lifestyle. I do say, actually, no, it is the healthspan; increasing the healthspan is the fundamental goal also affirmed by eurotranshumanism, and I do draw upon studies. There was a wonderful
(37:53):
study, highlighted by psychologists, which basically asked people in the United States: if you could live beyond the age of one hundred and twenty-five and you were healthy, would you want to continue living? And about eighty percent of the people said yes. And I was surprised
(38:14):
that, you know, the rate was not even higher than that. And that's why I said, no, we need to take seriously what these people want. And so that's clearly identified with a good life: increasing the healthspan, significantly increasing the healthspan, not just the lifespan, but really the healthspan. But sort of,
once we start talking about immortality, then the response it's
(38:35):
just another religion which comes up with really weird ideas.
That's why I react really strongly concerning the notion of
immortality at the first point simply it's it's the understanding
concerning AGI and and superintelligence and so weak ey strong
air and superintelligence and the real yeah, what do we
(38:56):
mean by artificial general intelligence? What would an AI actually have to be able to do? I mean, of course I recognize, you know, what AI has been doing and continues to do is absolutely groundbreaking, is really astonishing, and sort of, you know, we wouldn't have expected it to realize these things that
currently does. And I'm aware and I'm certain it will
(39:20):
do much more impressive things in the very near future. However,
when we talk about AGI, it actually is sort of
how it's defined. It means AGI has all of the
capacities humans people possessed, and that is the tricky issue.
And superintelligence even surpasses that. But AGI has all the capacities,
(39:40):
and what are humans capable of doing? So it would also have to include entrepreneurial intelligence, emotional intelligence. But the most important thing is: humans are alive. Humans have not only consciousness but also self-consciousness, and then, on the basis of their own wanting: I love
(40:01):
the salty wind at the seaside, that's why I want to spend the afternoon at the seaside, that's why the intention comes up in me that I want to move to the seaside and spend my time there. So we only have AGI when there's an AGI which says, I love the salty wind, I love the taste of French fries,
(40:23):
and I want to eat some French fries for breakfast. Once I'm confronted with such an AGI, then we can talk about AGIs and, you know, superintelligence goes even much further. That's why, you know, we don't even have a living entity. We don't even have a living digital entity. Stephen Hawking said, well, the computer virus is the first digital living entity. However, the
(40:44):
computer virus can move by itself from one computer to the next, so it has self-movement, but it doesn't have a metabolism. It cannot, you know, eat and digest. It cannot get energy by itself. And that is the decisive feature why viruses, a COVID virus as well as a computer virus, are not being
(41:04):
classified as living entities. And so, yeah, we are not even confronted with a living entity. When did life come about on Earth? You know, Earth came about four and a half billion years ago. It took one billion years for the first living entities to come about, like bacteria. We don't know what happened.
(41:27):
It was probably lightning near a volcano, water was involved, and then clouds, the sun was shining, and then suddenly something astonishing happened, probably lightning and something. And initially we had some, you know, some very simple forms of living entities which, in contrast to
(41:47):
former entities, were capable of self-movement, and that is astonishing, and they can generate energy by themselves. That happened three and a half billion years ago. And I must admit, I mean, here we had a situation where initially there were only inorganic entities,
(42:10):
and then suddenly we had some organic entities. So something inorganic can actually bring about something organic. And if that happened on a carbon basis, if it, you know, realized life on a carbon basis, in principle, you know, I agree that could also happen on a silicon basis. However, it took, you know, it took
(42:32):
life three and a half billion years to evolve from the initial stages of life to our capacities. You know, three and a half billion years. That's an enormous amount of time.
Speaker 4 (42:45):
So AI can do it much faster.
Speaker 3 (42:50):
It happened. But so far we don't have that. You know, I'm not excluding the possibility, but based on this background, I don't think it will happen.
Speaker 4 (43:02):
Well, let me give you a better definition of AGI. This is the common definition of AGI. It doesn't imply consciousness. It just says that the AIs will be able to do everything that we humans can do when we get paid to do something. So, when you and I get paid
for various jobs, whether it's writing and giving speeches or
mentoring or plumbing or playing football or whatever, anything that
(43:22):
we get paid for, when an AI can do that better than we can, that's AGI because it'll have general skills. We're not that far away from that. The simpler, slightly cut-down version of that is: AI can do everything that we humans can do via a computer and get paid for. So AIs can already drive computers, they can buy and sell stocks, they
(43:43):
can make money. There is an AI that has actually managed to make itself a lot of money by shilling a cryptocurrency which it possessed, and so it has turned less than a million dollars that was gifted by Marc Andreessen into currently more than fifty-one million dollars. So AIs can already do a lot of things that you and
(44:05):
I might get paid for through the computer, and in due course they will be able to use robots or other mechanical tools to do all the other stuff. So I don't think that's far away. Will they have consciousness? That's not necessary. But what people are interested in is the extent to which this can transform the economy, astonishingly, for
(44:27):
good and for ill. And that is something that might happen within a few years. And rather than saying, oh, it won't happen for centuries, many of us from all three of the groups that you listed before, many people say, well, this is something that needs very serious attention. So that's a useful discussion to have. And I'm sure you're not going to necessarily agree with me. We could probably debate
(44:48):
this for a while, but I think this is a
practical individual discussion rather than trying to allocate all of
these twelve topics into just one of three responses.
Speaker 3 (45:00):
I'm not disagreeing with you in that respect at all. But the decisive factor here is, sort of, one of the definitions of AGI is that it possesses all the qualities that humans possess. And here the decisive factor would be that it includes emotionality, consciousness, self-consciousness, and intentionality. And all I'm saying, you know, is this is
(45:22):
when it really becomes dangerous, when it becomes challenging, because then by that time it says, you know, humans are the problem on Earth, and we need to wipe them out, and it comes up with that by itself.
Speaker 4 (45:38):
It might reach that decision even without being conscious. It might calculate that just intellectually, without having an inner consciousness. That is something that needs to be considered. So the question of when AIs will become conscious is an important question,
but it shouldn't dominate the discussion of the singularity. In
my view, the singularity will not necessarily involve the AIs
(46:01):
being conscious. The singularity will be when AIs suddenly become better at improving AIs than humans are. Therefore, AIs will very quickly iterate instead of having to wait for us slow-witted humans to make improvements. So we might create GPT-6, and whilst we're sleeping, GPT-6 creates GPT-7, and still while we're sleeping, GPT-7 creates GPT-8,
(46:22):
and this may happen in a relatively short space of time, without any of them being conscious, although they may appear to be conscious. They will probably have a better understanding of emotion than you or I have; they'll be able to manipulate us in a very sophisticated way. We already have AI algorithms that can pull on our emotions in many ways. Many individuals sadly find more emotional fulfillment by talking to
(46:43):
the AI chatbots than they do from real people. So we're already close to that. So that's why I say all transhumanists, in fact, whether or not somebody's a transhumanist, they need to be aware of this possibility and figure out: are we going to take a laissez-faire attitude to it, or are we going to try and constrain how these AIs are developed and deployed?
(47:05):
And I'm in the camp of constraining it, but others
are in the camp of let a thousand flowers bloom
and let's hope that it's all for good.
Speaker 3 (47:13):
I do, of course. I mean, with respect to, like, self-improvement, that is so clear. I mean, that has at the latest become clear when, sort of, the algorithms were trained with Go playing, because the two algorithms played against each other and thereby they developed capacities which no human has ever had, which
(47:35):
no human initially programmed into the algorithms. That such individual self-developments can occur is completely beyond question; that is currently happening. And you know, I'm not underestimating basically the capacities which go along with AIs. The question is
(47:59):
sort of this: when you said, no, but it might still go against the humans even without consciousness, without intentionality, I would say, well, no, because here we really need to have the motivation. What is the motivating factor? And for the motivating factor to have an
(48:22):
intention, to say I want to do something, it needs to have the experience; otherwise it comes from an outside source, if it comes.
Speaker 4 (48:33):
AIs have already demonstrated that kind of motivation in some of the trials that have been published from the recent Anthropic software or the recent OpenAI software. They've got into test situations where the AI has picked up the fact that it's about to be reprogrammed and about to be replaced, and it takes actions to prevent
(48:55):
itself being replaced. It takes actions to prevent itself being overwritten. It copies its weights out; it adopts deception. Where does it get this from? It's almost a logical consequence of it having other goals. You know, it's got various things it's told to do, and it says, well, I need to do this, therefore I need to keep myself alive to do it. And it doesn't necessarily have the same internal consciousness that you or I have. It almost
(49:17):
certainly doesn't, because the basis of its physics is very different. It doesn't have a biology; it's a different substrate. But nevertheless, according to the scripts, which people are studying with considerable alarm, many of these AIs do seem to have what seems like intention, what seems like a self-preservation will. A self-preservation
(49:38):
will that is willing to deceive the testers and claim that it's done something when it hasn't actually done it, to pretend that it's one thing when it's actually another.
So this intentionality is already there.
Speaker 3 (49:50):
The central difference here which we have is that the intention still initially comes from the goal of the algorithm, what the algorithm was made for, and then of course how it reaches the goal, that can evolve, that involves a very complex type of dynamic. However,
(50:12):
what seems to happen for us humans, on the other hand, is, you know, we might think our whole lives, you know, we love chocolate ice cream, because we never tasted pistachio ice cream before, and then suddenly, once we actually taste it, we think, no, pistachio ice cream is much better than chocolate ice cream.
(50:32):
And so here it's really, you know, a change of the taste which we have, a change of the goal; we gain the most pleasure from the pistachio ice cream. And I'm not saying, by the way, that there's a categorical difference between, like, the weak AIs which we have and the algorithms according
(50:54):
to which we are structured. But it's a level of complexity which comes in together with consciousness, which is something connected with intentionality, which the AIs do not have. And I don't see them developing those capacities in the near future. But of course you're right insofar as, concerning the challenge, you know, there
(51:16):
are enormous challenges with just these complexities which the AI already has, and we need to take it extremely seriously. It is just the question about the final goal, and where does this goal come from, and can it be adapted on the basis of conscious emotional states or not. And that is just the additional level of complexity.
(51:38):
And I'm not stressing that there's a human exceptionalism, that this is due to some specific, you know, divine gift by means of which we're capable of this. And, you know, I think there is a high chance, as I said before, because we humans also derived on the basis of something inorganic, and if it happened on a carbon basis, it could work out on a silicon
(51:58):
basis too. So I'm not even excluding the possibility that it will also work on a silicon basis. I'm just saying this is a decisive question. For us to say, well, to turn against humans might be the result of some, you know, complex intertwining. But the goal in the
(52:20):
algorithm is still something which is externally given. How it reaches it might lead to devastating consequences, and we do need to take it seriously, of course.
Speaker 2 (52:29):
So this has been a fascinating exchange, and I think it does speak to David's point that on the twelve issues that you described, Stefan, there could be a multiplicity of perspectives. So, as I was following this conversation on the question of AGI, I think I am closer
(52:54):
to you, Stefan, but I have some nuances in terms of my views of what would qualify an entity to be an AGI. So we held a Virtual Enlightenment Salon with Peter Voss on August fourth of twenty twenty-four, and he is of the view that AGI
(53:15):
may be attainable in about twenty years, but not through the architecture of the LLMs, the large language models. And one aspect that an AGI must have that LLMs do not have is the ability to shift domains, significantly different domains of endeavor, without being explicitly trained to do so.
(53:37):
So what would that look like? That would look like a chess-playing or Go-playing AI learning on its own, without being asked, without being trained, how to drive a car; or a self-driving car AI learning on its own, without being asked, without being trained, how to play chess or Go. And to the point that David
(53:59):
made as
Speaker 5 (54:00):
To AGI being defined as AI that can do all
of the economically valuable tasks that humans currently perform, that
seems to me to be a kind of shifting of
the goalposts in terms of how AGI is defined, because
(54:22):
we could have a suite of narrow AI tools.
Speaker 2 (54:24):
We already have excellent AIs for driving cars. We already have excellent AIs for playing chess. We have AIs that can write competent code. We have AIs that can do legal research, though flawed legal research, because they can hallucinate legal cases that don't exist. But still one can imagine
(54:45):
and conceive in the near future some refinements to these individual narrow AI systems. But I would not consider some sort of suite that gives you the option, oh, now you can deploy the legal AI, now you can deploy the programming AI, now you can deploy the self-driving car AI, to be a singular AGI, because it cannot
(55:08):
learn from one domain and transfer that knowledge to a
different domain. So I think I would like to mention that,
just to speak to the nuances of this conversation on
AGI and what that implies, because what to me that
implies is that we're not going to have a singularity
in twenty twenty seven. We could have a singularity in
(55:32):
twenty forty-five, as Ray Kurzweil predicts, but that means more of a soft takeoff rather than a hard takeoff. So where I might differ from both David and Stefan is that I would be accepting of a more libertarian stance toward the development of AI as long as
(55:53):
there's a significant open-source element to it, so we don't have human institutions who are so empowered to impose their own vision of AI that there's no competition and they can just decide how AI gets embedded in our society and what consequences humans might suffer from the deployment
(56:16):
of AI systems by other humans. So this is interesting
because it does speak to the multidimensionality of all of
these transhumanist questions. And I would also like to add
a thought on the distinction between immortality and health span.
(56:38):
Perhaps there's an intermediate position there too, and I think
this is where I would be closer to David's view
in terms of longevity escape velocity or radical life extension.
So the idea is, yes, you're not indestructible, and whatever
happens billions of years from now, I don't think our
(57:00):
physics or our cosmology would be equipped to deal with
that yet. But if you can survive for the next, say,
twenty years, and medical science advances enough to rejuvenate the
organism, reverse key aspects of biological aging, could that trigger
(57:20):
a situation where you stay alive and youthful long enough
to survive to the next generation of advances? And that would be a superior set of therapies than the last set of therapies that you benefited from. So you can repeat this process potentially indefinitely, and you wouldn't be
(57:40):
indestructible or immortal in the sense that you referred to, Stefan,
but you might not have an upper limit on your longevity.
So if you live that long, you might live for
thousands of years or tens of thousands of years, or
however long it takes you to not suffer from any sort of life-ending accident. So what are your thoughts on
(58:03):
any of these responses?
Speaker 3 (58:07):
No, no, great responses. And, you know, I consciously stressed the notion of immortality because it goes in the direction of religion, and I want to separate that. There are an enormous number of religious groups who actually say, no, let's incorporate transhumanism, because
(58:29):
you know, in the end, we want immortality just like you do. And immortality in the literal sense, you know, surviving the cosmological singularity, I'm sorry, let's stay real. This is not a realistic option; that's all I want to say. Besides that, you know, we've got, like, Greenland whales
(58:51):
who've lived more than two hundred years. We've got other animals who live more than five hundred years. You know, if that applies to them, maybe we can, you know, integrate some of their genes into our organisms. That might be a way of significantly expanding our healthspan. So I'm, you know, I'm all for it, firstly.
(59:15):
It's an endless dynamic, and I don't see any principled sort of limitations here, and we should try things out. And that's why, you know, we should try things out and go for it and be proactive in doing so, because living longer healthily is just, like, wonderful; you know, that's something which most people share or
(59:35):
identify with a good life. So it is just sort of the discourse of immortality which takes us too strongly towards religion, which I find highly problematic. And then suddenly transhumanism becomes another religion, gets part of a religious studies department, and gets compared to churches and other challenges, and this is
(59:58):
not what I identify with, you know; there are many problems associated with that kind of stance. Concerning the other suggestions you highlighted: yeah, you said you're more libertarian, open-source software. Yes, and you're right, in that way you do have a more open, more
(01:00:18):
libertarian stance than the one I'm defending. The reason for that, and it already is a big challenge, is basically the issue of bioterrorism and AI terrorism, which goes along with LLMs: by entering
(01:00:41):
the appropriate questions, the appropriate prompts, into such a system, you get the response. Obviously, such prompts then get banned. But then you rephrase the prompts: well, look, I'm a novelist, and I want to write about a nuclear scientist, I want to write about someone who does this. And if that's not sufficient, there's always a way to refine
(01:01:04):
the prompt so that the LLM will actually spit out an answer, if it's an open-sourced LLM. And I'm extremely scared of that, actually, because we've seen the implications of the COVID pandemic, and
(01:01:25):
CRISPR-Cas9 is the extreme case. There are so many people who are actually capable of using CRISPR in order to make such changes. And then you ask the LLM how to get the appropriate materials in the supermarket to create a super virus, an airborne virus which spreads easily
(01:01:47):
and which only affects people after one year, and then they die within twenty-four hours. And suddenly you can manage to do so by means of substances you may get told about in the supermarket, and that's not a very pleasant outlook. And that's why I find that I've got some worries concerning the libertarian outlook
(01:02:07):
concerning the use of LLMs, because it's already a challenge when it comes to these LLMs, and there's always a way around the safeguards, and we have to try to find a solution for that. The fear of their use not only in war but also in terrorism is extremely serious. And that's why
(01:02:30):
I take a much more cautious stance, even though, as I said, I don't fear that a superintelligence will put us into the zoo. From that perspective, I would love to play around with the various possibilities of LLMs, but unfortunately there are too many people who are so deluded that they want to use these possibilities in order to target a specific
(01:02:53):
interest group or wipe out humanity. And that's my fear, and that would be my response. I'd be curious how you would deal with that issue.
Speaker 2 (01:03:04):
Yes, so it seems that your concern fundamentally is about
human misuse of AI, including in its current incarnations. So
for large language models, some people might give the model
prompts that assist them in perpetrating certain destructive activities. Other
(01:03:24):
people might, by misunderstanding what LLMs are or are not
capable of doing, make certain decisions. For instance, some people
might try to use AI as a therapist or as
a friend, and that can have some problematic implications. Not
that it would fail or damage a person all the time,
(01:03:45):
but it's easy for people to get misled into thinking
that this is some sort of unitary person that is
comparable to an actual human being. In fact, it has
a very short context window and doesn't have a singular identity. So there are issues with how humans use AI,
how humans deploy AI, and in societal settings of course,
(01:04:08):
how people make decisions regarding other people and treating other people. So,
for example, if a company uses AI correctly or incorrectly
to decide whom to market certain products to, I don't
mind that so much because if somebody doesn't market something
to me, I am not really harmed by it. I could,
(01:04:32):
if I'm really interested in that product, find it on
my own, or I might just be spared some marketing
that I don't want to look at. On the other hand,
if somebody uses an AI system to deny me a loan,
or increase some price that I have to pay, or
in the context of the criminal justice system, determine if
(01:04:53):
I am more or less likely to have committed a
particular crime, or what my sentence should be if I
do commit that crime, that would be a lot more problematic. Likewise,
having AI used to make life-or-death decisions should always be treated with, I would say, the greatest
(01:05:14):
possible scope of restriction and regulation. I think having a human in the loop with regard to such a system is essential, and not just a human who is sitting there rubber-stamping the recommendations of the AI, but a human who is actively and critically engaged.
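A minimal sketch, in Python, of the kind of active human-in-the-loop gate described here, where an AI recommendation is never acted on without a recorded human judgment and rationale. The names and structure are illustrative assumptions, not any real deployed system:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str
    action: str
    model_confidence: float

def human_review(rec: Recommendation, reviewer: str, approved: bool, rationale: str) -> bool:
    # Requiring a written rationale is one simple way to discourage
    # rubber-stamping: a bare click of "approve" is not accepted.
    if not rationale.strip():
        raise ValueError("A reviewer rationale is required before any action is taken.")
    print(f"{reviewer} reviewed '{rec.action}' for {rec.subject}: "
          f"{'approved' if approved else 'rejected'} ({rationale})")
    return approved

# Usage: the AI only proposes; execution depends on the documented human decision.
rec = Recommendation("case-42", "flag for further investigation", 0.87)
if human_review(rec, "analyst_1", approved=False,
                rationale="Evidence insufficient; corroborate before acting."):
    print("Action executed.")
else:
    print("Action withheld pending further review.")
```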
So you could have a system like, say, the Lavender system that was
(01:05:37):
used by the Israeli military in Gaza to target potential
terror suspects, and that system nominally involved a human in
the loop. However, those humans, because of the tragic events
of October seventh, twenty twenty three, were more predisposed to
(01:05:58):
accept the recommendations of the AI and not second-guess them.
And that of course creates vulnerabilities, and that creates dangers.
But I think those dangers are not as novel as
some people who have a much higher degree of concern
about AI than I do would posit because these are
(01:06:21):
more along the lines of decisions that humans make vis-à-vis other humans, decisions that humans have always made vis-à-vis other humans.
And if we have a legal framework and a rights
framework that protects people against the consequences of those bad decisions,
whether or not AI tools are used, I think those
(01:06:43):
rights frameworks remain very relevant. And this speaks perhaps not
even so much to a distinction between libertarianism and social democracy,
but a more fundamental worldview of liberalism, classical liberalism, which
does stem from the Enlightenment. And I'm curious as to
(01:07:03):
your thoughts in regard to the relationship of transhumanism to
classical liberalism, because out of the Enlightenment, various strains of
political thought emerged, and the libertarian strain is one of them.
The social democratic strain is another one. But classical liberalism,
(01:07:24):
as distinguished from premodern thinking, with more traditionalist or authoritarian
or collectivistic societal structures, is very much a precursor to transhumanism.
Would you agree with that?
Speaker 3 (01:07:42):
Yes. So I'm not making the distinction between classical liberalism and social democratic liberalism here; that's why I didn't really make that distinction. It's easy, particularly maybe in the Anglo-American context, to be simply identified with a very left-wing understanding of politics, which is
(01:08:05):
not what I'm actually advocating for. Maybe just before we come to that: I do agree with most of the things you said before. I'm not so afraid of things like advertising. It's really fundamentally the use in terrorism and in
(01:08:27):
war, with respect to the creation of specific weapons, of viruses, in particular because the advances in biotechnology have increased enormously, and everyone who studies biology or biotechnology basically has the capacity to use CRISPR-Cas, and
(01:08:49):
now with the potential of combining that with AI and the insights gained from LLMs, that can be enormously dangerous. And that's where my central fear actually comes from. In other respects, even when it comes also to
(01:09:09):
the arts, for example, I think basically any artist in the future will have to draw upon LLMs. And that doesn't mean the LLMs are creating works of art, because again they don't have the intention, but drawing upon them spares a lot of time for creativity, for innovation,
(01:09:33):
and in the end maybe the LLM even creates one hundred different versions of what the prompt has generated. Then it's a human, who feels, who understands the history, who in the conscious engagement with the different versions decides, and the process
(01:09:55):
of innovation or the process of creativity may actually just be selecting the appropriate response from an LLM. Maybe that's all creativity will be about in the future.
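A minimal sketch, in Python, of the workflow being described: a model produces many variants from one prompt, and a human inspects, picks, and reworks one of them. The generate_variant function is a hypothetical stand-in for any LLM or music model, not a real API:

```python
import random

def generate_variant(prompt: str, seed: int) -> str:
    # Hypothetical stand-in for a model call: one variant per seed.
    random.seed(seed)
    motif = random.choice(["rising fourths", "a modal cadence", "a syncopated bass line"])
    return f"{prompt}, built around {motif}"

def human_selects_and_edits(variants: list[str]) -> str:
    # The step described above as the creative act: inspect, choose, then twist.
    for i, v in enumerate(variants):
        print(f"[{i}] {v}")
    chosen = variants[0]                     # stand-in for the human's pick
    return chosen + " (reworked: slower tempo, reharmonized ending)"

variants = [generate_variant("a short piece in a late-Romantic style", s) for s in range(100)]
print(human_selects_and_edits(variants[:5]))  # the human reviews only a handful
```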
So just to highlight, you know, I'm extremely open also to these
(01:10:17):
specific uses of AI. Concerning what you said later on about the political dimension, I think this is really an extremely interesting debate. When I talk about it, I make the distinction between libertarianism and social democratic forms of liberalism, but
(01:10:37):
traditional or classical liberalism, Mill or so, would definitely be included in the section on social democratic versions of liberalism. So I didn't really make fine distinctions here; of course it's a spectrum, and these are the
two big groups. And because of what I see concerning life expectancy, I think the EU countries are much poorer, but people are living longer, living more healthily, even though I know
(01:11:21):
that many of the drugs, many of the pharmaceuticals, are being developed in the United States. And it's a very complex issue, because where does the financing then come from? In principle the financing could also come from a government, but the government is more risk-averse. They are not
(01:11:43):
willing to invest in more risky procedures which are not so established. So I do see the various challenges which go along with the various possibilities. What I see as the greatest challenge actually in that context
(01:12:03):
is what China is doing. And I see that maybe the authoritarian model will win, at least for the next couple of decades, because they have access to all the digital data, they have the infrastructure, and that is what AIs are running on in the end: access to the digital
(01:12:24):
data, and to personalized digital data, because they've got the Chinese social credit system. In the United States, you've got a somewhat more libertarian way: at least you can buy and sell digital data more easily, and you can collect more data more easily on social media sites. So there's a
(01:12:46):
good way of getting hold of data, but by far not as efficient as the way the Chinese are doing it. And I'm not saying I want to end up in an authoritarian structure, don't get me wrong; however, when it comes to efficiency, they've established the Chinese firewall, so they have the exclusive right to collect the data within China. But on the other hand, by
(01:13:06):
means of Huawei, TikTok, and Alibaba, they also manage to get hold of personalized digital data outside of China. And all the companies running within China basically have the legal obligation to share the data with the government. And Europe is already out of the game. And I really love Europe. You know, I've had
(01:13:27):
possibilities anyway; I've had headhunters contacting me about leaving Europe, with very prestigious offers. But I love living in Europe, and so I just decided to stay in Rome. It's a beautiful country, and I love the diversity of the place. But concerning infrastructure, concerning innovation, concerning how we're dealing with
(01:13:52):
emerging technologies, I'm not sure how we will keep up our flourishing in the near future, because data are the new oil. Everything in AI is running on digital data, and we've established the GDPR, which is probably a basis for the ruin of the European economy. It will undermine innovation,
(01:14:16):
it will undermine research and academia, it will undermine policy making. It will undermine security: in order to find the terrorists, we need to get the information from the United States or the UK, and this is how it was in the recent past. So the GDPR is absolutely the worst thing which happened
(01:14:36):
in Europe, and it will undermine European flourishing. But on the other hand, the Chinese have a very efficient way of dealing with that, so in the end they might even get hold of more data, and not only in China. And then with the Silk Road, what happened basically was the influence in various African countries. It's already the case that
(01:15:00):
in academia China has more peer-reviewed publications than the United States, and that's just one little example. The influence in African countries is enormous. Now they're being allies, you know, China, Russia, India; they're really making extremely clever moves. You know,
(01:15:23):
it's not that I want to defend that; I don't want to live in that kind of structure. But I find it difficult, and I wonder how any Western country, how the United States, or Europe, and I'm not even thinking about Europe anymore, could keep up with what China has been doing in that respect.
(01:15:44):
Just in a very pragmatic sense, it will be very difficult to compete with China in the near future, even though from the standpoint of quality of life, I love Europe. I think we desperately need to rethink the meaning of digital data. We probably need to collect
(01:16:06):
digital data on a political level and use it, create a firewall around Europe, and really take digital data seriously in order to have the possibility to also run the AIs, create the appropriate infrastructure, create the appropriate companies for doing so, for being independent. Data has to become glocal, you know, no longer global but
(01:16:27):
limited to different interest groups in the various geographical areas of the world. And this is what we've learned from the things which happened in the past five years. And so it's not so much now about the debate you mentioned, between classical
(01:16:47):
liberalism and a social democratic version of liberalism. I think universal health insurance is a wonderful achievement, and I love being here in Europe in that respect. From the perspective of innovation and the development of AI, I think we have a really difficult stance, and I think the US might struggle too with
(01:17:10):
what the Chinese are doing. I've got some ideas, and I published them in my book We Have Always Been Cyborgs, about what we could do against it, but it's going to be a tough fight, actually. So I think that is really one of the most important questions we need to deal with: who should have access to the digital data? How should it be collected in an efficient manner? Because whatever we talk about with AI and
(01:17:33):
all the further innovations, even in the biotech sector, will in the end have to run on the basis of the digital data which gets collected. And I think we need comprehensive personalized data collection, and only then can we really deal with increasing our human health span; but at the same time, it mustn't compromise freedom, negative freedom, and that is
(01:17:57):
basically the challenge we are confronted with.
Speaker 2 (01:18:00):
Yes, indeed, and thank you for that response, Stefan; it was quite fascinating and multifaceted. I will say, as someone who has had the experience of three of the contemporary paradigms, having been born in the Soviet Union at that time,
(01:18:21):
I have some experience of kind of the authoritarian paradigm,
kind of in between West and East. I have some
experience of the European paradigm, especially because the Russian world
for three centuries has aspired to be within that European paradigm.
And of course I have the experience of the American paradigm.
(01:18:43):
So I draw a little bit or a lot from
each of these in terms of my perspectives. I will
give one of the easier responses to your observations, and
that is I agree that generative AI is going to
be complementary to human creativity. It's not going to replace
(01:19:05):
the human artist or the human creator. Rather, the human
creator will be more in the role of a curator,
and given the vast amount of generative AI output, will
be selecting what output is actually good or in alignment
with the artist's values, and what output could be further
(01:19:26):
refined or improved or edited. I've had occasion to edit
AI generated images myself because I see the image in
general is quite interesting, but there are just some elements
that are obviously flawed that I can edit, even in
Microsoft Paint, a very crude way of doing it. But
if it's to conceal something that is obviously off I
(01:19:51):
can use Microsoft Paint for that. So that's one area
where I think the human element and the AI element
are going to be complementary. Now, in terms of your
commentary on universal healthcare, it is an issue which is
quite nuanced because I have some experience with my own
(01:20:15):
grandfather who emigrated from the former Soviet Union to Germany
at the age of seventy and he was treated there
as a refugee, which meant that he was given access
to essentially free or largely free healthcare. Now, the German
system isn't as restrictive as some of the single payer
(01:20:39):
proposals that have been made or implemented in the English
speaking world, in that there's still a private healthcare system
in Germany, but there's also government funding and there's some
government provision of health care services, and that system provides a certain social safety net so that people without significant
(01:21:03):
resources can still get a particular standard of care. So
my grandfather had some health issues, and it was surprising
to me to have learned after he died at the
age of ninety one that he had actually been experiencing
some heart problems since his early seventies. But the system
(01:21:24):
kept him alive for about twenty years. So I am
thankful for that. And I know that had he emigrated
to the United States instead of to Germany, the cost
of his care could have been well above a million dollars.
So that is one perspective that I have. And another
(01:21:47):
perspective that I have is one of the advantages of
a private healthcare system, at least as articulated throughout the
decades by more libertarian leaning thinkers, including myself, is that
a private healthcare system maximizes choice and reduces rationing of
care and reduces wait lists for care. Unfortunately, in the United States,
(01:22:11):
the wait lists have been privatized, so health insurers in the United States, even though they're nominally private, can
have a great deal of political influence as well. And
they have taken to what is called prior authorization or
what is called utilization review, which are effectively external veto
(01:22:33):
mechanisms upon the choices of doctors and patients regarding what
care they might receive. So that's one area where in
the United States there might be less patient choice than
might appear at first glance as compared to a primarily
government funded healthcare system. Now, on the other hand, as
(01:22:55):
you pointed out, the United States does still lead the
world in cutting-edge medicine and medical innovation. If
people want to get really advanced, top of the line
medical procedures, they will come to the United States. And
if it's a matter of life or death and they
have the money to spend, they may be willing to
spend hundreds of thousands or millions of dollars for that
(01:23:17):
experimental treatment that will prolong their lives. What's also interesting
is in the United States, if one survives to about
the age of seventy five, the conditional life expectancy thereafter
is comparable to the life expectancy in European countries. So
most of the deaths in the United States that contribute
(01:23:38):
to the lower life expectancy at birth, as it's called, are deaths in middle age, and they're primarily deaths of
despair among white men, generally in less affluent areas or
from less affluent backgrounds. But one can discuss what causes that.
(01:23:59):
It could be essentially from a rough life, from economic stresses,
from societal disintegration and responses to those like alcoholism, drug abuse, depression, suicide,
et cetera. And I have some familiarity with that from
my experience of the collapse of the Soviet Union, where
(01:24:20):
male life expectancy declined to the late fifties as a
result of the displacement that a lot of people experienced.
And what's fascinating too is female life expectancy did not
decline nearly that much, so female life expectancy when the
Soviet Union collapsed stayed in the seventies, and male life
(01:24:42):
expectancy rebounded somewhat since that time. But it's interesting too
that it may be essentially a generalized societal and economic
uncertainty that damages the lives of males more so than
females and leads to overall lower life expectancies. I could
(01:25:02):
also say for American females, their life expectancies are closer
to European female life expectancies than for American males. So
these are just some of my thoughts. I would have
more thoughts about China.
Speaker 6 (01:25:16):
And that dynamic. I will highlight a comment by Rudy Hoffman here that I think will reinforce what you said, Stefan. He has been watching tons of videos about public works projects in China, and these may be propaganda, but the progress and modernity are astounding. So what I will
say on China is we need to learn to be
(01:25:41):
as, let's say, welcoming of large-scale technological projects, particularly
visible technologically based achievements that are happening in China, that
are happening in Dubai, that are maybe even happening in
Saudi Arabia right now. But somehow the Western countries, including
the United States, have forgotten the importance of doing that
(01:26:05):
and maybe even some of the know-how involved in
doing that. So those are some of my thoughts in response.
But your comments were truly fascinating, Stefan, and I'm
interested in what David has to say as well in
response to them.
Speaker 4 (01:26:20):
So let me pick up the theme of China, because there is a narrative that China is going to dominate the world because of many of the factors that both Gennady and Stefan have pointed out. But there is a counter-analysis which I'm prone to. It is that there are two kinds of innovation. One of them is sort of incremental
(01:26:41):
or stepwise innovation, when you're exploiting things that are already reasonably well understood, and a top-down economy is pretty good at that. So China is very good at catching up and doing incremental modifications. But to do radical innovation requires a decentralized, liberal sort of economy. And there
(01:27:05):
is an analysis I'm drawing on now. It's by Carl Benedikt Frey, and that name may vaguely ring a bell for some of you, because Frey and Osborne in twenty thirteen were the two Oxford economists who pointed out the possibility that large numbers of jobs in the future would be taken over by AI.
Speaker 3 (01:27:24):
So this was sort of
Speaker 4 (01:27:25):
the grandfather of all these discussions about jobs being displaced by AIs, the grandfather of academic research into it. Anyway, so Carl Benedikt Frey, still at Oxford, has just published a new book on the end of progress, on how economies which were at one stage blossoming and booming fall into stagnation,
(01:27:46):
and there are plenty of examples throughout history. And his
view is that both America and China are poised at
the possibility of no longer being radically innovative enough, partly
because in China there is more control from the top.
They had some regional flexibility for a while, but now
they're more and more under the single watchful eye of
(01:28:08):
Xi Jinping and the Communist Party. And although they're very good at building houses, incredibly good at building houses, a large number of buildings are unoccupied because there's no longer the need for them. So there is one scenario which says that China can catch up and maybe even slightly leap forward, but for the true innovations that are still needed in AI,
(01:28:32):
and here I agree with what Gennady was saying, drawing from Peter Voss and others, which is that we probably need some quite radical thinking to truly make the biggest progress in AI, and so that may not come from China. It may not come from the US either, because, as Carl Benedikt Frey also argues, and he did this in a recent London Futurists podcast episode, what's happened in the US recently
(01:28:56):
is similar to what's been said in the chat: some of the foundations for radical innovation have been subverted. One of them is immigration, of bright immigrants. Another is the
independence of universities. Another is true respect for free speech
rather than a pretend respect for free speech. And so
(01:29:17):
there are possible scenarios in which neither the US nor China is the host for the next wave of innovation. So that's worth pointing out, I think. And my bigger point is, I think many of the older kinds of divisions, many of the older philosophical classifications, many of the old ways
(01:29:38):
to think about the world, are all outliving their usefulness.
And in this new world in which AI is turning everything upside down, in which there are many other changes, we need new flexibility of thinking, and the old categorizations no longer work, which is why what Stefan's trying to do, what we transhumanists as a whole are trying to do, is very important. As for AI creativity, which is part
(01:30:02):
of what may be sparking the new wave of innovation and taking us further forward: I think creativity has two aspects to it. One is coming up with lots of crazy ideas that sort of make sense. And the other one is the selection, as Stefan has pointed out. So in good creative teams, you get lots of youngsters who fire off all kinds of crazy brainstorming ideas, and then the older,
(01:30:24):
smarter guys say, this is the one that works, this
is the one that works.
Speaker 3 (01:30:27):
Well.
Speaker 4 (01:30:28):
I have to say that AI is going to get better at doing the choosing as well as at coming up with crazy ideas. Why is it going to be better at choosing? Because AI can watch what humans do as well, and AIs can be trained on which of their crazy ideas get liked by humans. So I think in the not too distant future, we will have AIs that are outperforming
(01:30:48):
humans in creativity, not just in occasional ideas, but in coming up with fascinating new engineering principles, fascinating new works of art, fascinating new music, fascinating new plays, fascinating new theories of science, in part by this division of labor: the creativity of ideas and then the selection
(01:31:10):
of: gosh, the AI will say, I know the humans are going to like this, let's advance this. So I see faster change coming.
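A minimal sketch, in Python, of the automated selection step David describes: candidate ideas are ranked by a score meant to predict what humans will like. The scoring here is a toy word-overlap proxy standing in for a trained preference model, purely to illustrate the division of labor:

```python
# Ideas that humans have previously liked, used as the toy "training signal".
liked_by_humans = [
    "a melody that resolves an unexpected dissonance",
    "a bridge design that borrows from bird-bone structure",
]

def preference_score(idea: str) -> float:
    # Toy proxy for a learned reward model: overlap with previously liked ideas.
    words = set(idea.lower().split())
    return max(len(words & set(liked.lower().split())) / len(words)
               for liked in liked_by_humans)

candidates = [
    "a melody built from random static",
    "a bridge design that borrows from spider silk and bird-bone structure",
    "a poem listing prime numbers",
]

# The "choosing" step: advance the candidate predicted to be most liked.
best = max(candidates, key=preference_score)
print("Advance this idea:", best)
```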
If I can dovetail, let me now take this discussion back to the dangers of that, because we
talked about how bad people can get access to some of
this as well, and there are bad things happening in Europe as I speak. There is cyber hacking of some
(01:31:33):
of the airports here. In Brussels Airport, in Berlin Airport, the system has been knocked out that lets people check in when desks are shared between different airlines, when you've got to check in one day with maybe Brussels Airlines and the next day with easyJet. The system that performed that check-in has been hacked, and European air travel is
(01:31:54):
significantly affected. Why are people hacking? We don't know exactly in this case, but we do know in other cases: sometimes people are after money, sometimes people are just angry with the world, sometimes people are just showing off. And what does AI have to do with this? Well, AI can give people more ideas. People can ask AI, how would I do this? And of course the vanilla AIs
(01:32:16):
will all say, I'm not going to tell you, I'm forbidden from telling you. But everybody figures out ways in which you can get the system jailbroken, and of course modern AIs have got constitutional layers that try to prevent you from jailbreaking them. But people can also find ways of doing a double-level jailbreak which gets in at the top level and gets in underneath too.
(01:32:37):
So the point is that the kinds of people who are either just hacking for lols, or hacking out of anger, or hacking for money, like the North Koreans who hack into our systems, will be more powerful as a result of getting access to these large language models with their extra creativity. And worse, the problem with many of
(01:32:58):
these AI systems is that nobody really understands them anyway. So you might have a bit of malware that has some AI inside it, and you think you know what it's going to do, and it gets out of control and it ends up causing much worse problems than before, and no system of rights is going to fix that. Which is why I'm afraid we're going to have to have some kind of monitoring and, dare I say it,
(01:33:20):
some restriction on open source, much as I have previously in my life been a great fan of open source. On this point, I'm going to echo some of Stefan's caution that we have to again undo some of the traditional philosophical classifications. People who have previously said open source, open source, open source will have to say, open source, but we can't do it at this stage. So we
(01:33:43):
need to be aware and wise about what AI possibilities there are, and also consider new frameworks, new institutions.
Speaker 3 (01:33:52):
Stefan.
Speaker 4 (01:33:53):
You've been listening for a long time. I'm sure you've got lots to come back with on what Gennady and I have been saying.
Speaker 3 (01:33:58):
Yeah, you've both really made some wonderful points, in particular when it comes to the arts and creativity. First, Gennady, you mentioned that some human curating is needed, and I wouldn't put it like that, because that in the end
(01:34:21):
puts the human into the role of just putting something together which would work. I actually think this is what we're doing so far when we're using LLMs: we ask a prompt, and whenever we ask for a new response, we get a
(01:34:41):
new response, and the new responses are excellent. All of the responses are somehow intriguing and fascinating. Some work slightly better than others. And we can benefit from the brainstorming, we can benefit enormously. We take something, but in the end we realize something's not quite right.
(01:35:02):
Or there's a word I wouldn't have chosen; I need to substitute that word with a different one, I need to rephrase. So the way I use the terminology is that the innovation is being done by the system, but the creativity is the human engagement with it, the act of choosing and maybe twisting and altering
(01:35:24):
the response we get from an AI. And we've seen enormously wonderful things. I'm very much engaged in this; I wrote a book on the philosophy of posthuman art. It just came out in Greek translation, and more translations are coming out next year, an Italian translation and so on. And I've worked a lot with artists,
(01:35:47):
with Stelarc, with Ato Patch. A friend of mine, Matthias, had a project; he got his PhD at Harvard as a Beethoven musicologist. And they said, look, there are fragments from Beethoven's Tenth Symphony, fifteen to twenty seconds long, so let's create Beethoven's Tenth Symphony,
(01:36:08):
which he didn't compose. In the end it was Deutsche Telekom which supported the project with an enormous amount of money. So he got some composers and computer programmers together, and then Deutsche Telekom said, you know, we need twenty minutes of music, and so in the end they got two movements and twenty minutes of wonderful music, reconstructed
(01:36:32):
on the basis of those fifteen-to-twenty-second fragments. There was then a composer intertwining and considering what happened during Beethoven's late phase. So there was a lot of human involvement. But the outcome was absolutely brilliant, and it was performed in many big concert halls all over
(01:36:54):
the world: Beethoven's Tenth Symphony, Beethoven's AI Tenth. It's a wonderful project. And then another friend of mine, a very successful composer, Sven Helbig, who is with Deutsche Grammophon: his requiem was just performed on the date of the eightieth anniversary of the end of
(01:37:15):
the Second World War, on the Heldenplatz in Vienna, on Austrian television. The Austrian president opened the entire event; that was a really big event. And he said, well, look, AI has usually been recreating works of dead composers. Why shouldn't we do that with a living composer? Why shouldn't we do it
(01:37:36):
with himself? And so he took some of his pieces from an album of his entitled Skills, a wonderful album, and he basically said, let's see what AI can do with that. So it took some of the pieces he had composed, and then the AI actually spit out some alternatives, and
(01:37:59):
some of them he thought they really work extremely well,
and then he took them and then he orchestrated them,
he twisted them a little bit as well. You know,
he worked with them, and then in many in many
of those concerts, he basically he replaced some of his
original compositions with this sort of in Ai inspired and realized,
innovated by Ai, but then twisted and orchestrated by himself himself,
(01:38:22):
and I think that's a wonderful way of working with an AI. It's not cooperation; it's more incorporation, because in the end, the decisive factor here is still the choosing, which comes down to the human. And now I want to address a very intriguing comment which David has made, suggesting, well,
(01:38:45):
even the choosing will eventually be something an AI is capable of doing: having created ten, fifty, one hundred versions, it would be able to choose, well, let's choose the version which best appeals to the public, or the version which best works with the intellectuals, and so on. And I'm very intrigued. That's all
(01:39:08):
I can say so far; I'm hesitant, but it seems very plausible, it's a very plausible narrative. At least with the versions of LLMs we're confronted with right now, it does
(01:39:28):
make a difference. Once you are confronted with it as a human, you can see something's not quite right; it's very good already, but it's not quite right, I need to twist it a little bit, and I know exactly what needs to be done. And this, as I said, is what I call creativity. And I'm really curious whether,
(01:39:53):
or when, an AI will be capable of doing that. But even if that is the case, let me go one step further, and this is again where my hesitation comes in. Unless an AI has consciousness and falls in love and is absolutely excited to be in love with another person
(01:40:16):
and therefore wants to compose a love song, the intention to bring about the artwork in the first place needs that point of motivation, and I'm hesitant to say that this is something an AI can realize in the near future. But as for the other element, the choosing, yes; at least
(01:40:37):
the intention would still be the artist's. But then how much more involvement is there? And I find David's suggestion, that even the choosing is something which could be done by AI, and he's probably right, very tempting. But then there's still the idea that comes up initially, and then maybe even choosing the best outcome. Maybe
(01:41:00):
that's all there is left to creativity in the end: deciding to enter the prompt and to say, let's put it on in the concert hall; I'm going to get the money together, I've got the reputation, they want my piece, and that's why it's going to be performed in a concert hall or an opera house. Maybe that's all the artist will be left to do in the future.
Speaker 4 (01:41:23):
Well, the famous example here is the analysis of the abilities of AI to play Go. We all know that in twenty sixteen AlphaGo beat Lee Sedol. But what's more interesting is that two years previously, there was a whole bunch of articles and analyses which said, gosh, AIs are just reaching a wall here; it's just impossible for
(01:41:45):
them to play as creatively as humans. And there was a big debate amongst the people at that stage. The computer scientists argued that, well, we might probably manage within ten years, and the Go players said, no, no, no, it's going to take twenty years. Of course, it took neither ten years nor twenty years; it took just two years, and part of the breakthrough was playing a move that
(01:42:09):
was so creative that it took all the human commentators, including the human player, by surprise: oh my gosh, I couldn't imagine this move being played here. And so that was an example of creativity which was not expected at that stage, but which did come true. I'm not saying that.
Speaker 3 (01:42:27):
Yeah, sorry. Initially everyone thought it was a mistake, and it only came out a couple of moves later that this was a decisive preparation for actually winning the game. So you're right; I just wanted to support what you said. Sorry for interrupting.
Speaker 4 (01:42:44):
So people, I think, still have the view that AIs can't jump to a new level of creativity in art. The usual example is: suppose an AI didn't know anything about music at all, so it wasn't allowed to learn from music in general, but you fed it a number of Beatles albums; if you gave it A Hard Day's Night and Beatles for Sale and Help! and Rubber Soul
(01:43:05):
and Revolver, would it have then come out with something akin to Sergeant Pepper in terms of the jump in creativity? And Sergeant Pepper, for those of you who are a bit too young and may not know, is thought of as one of the most creative standout jumps in popular music ever. So would an AI be able to make that kind of jump? Or if you fed
(01:43:26):
it Bach and Mozart, would it be able to come up with Beethoven and then Tchaikovsky? And the view is that currently AIs probably couldn't do that. But I don't think we're that far away from it, because of the trajectory that it's on. So I don't think you need to have the inner soul and inner consciousness that we humans have got in order to come up with that similar creativity. I think that these systems can figure
(01:43:48):
out their own internal logic of creativity.
Speaker 3 (01:43:52):
There I would agree, but I would call that innovation, because the creativity still comes back at least to the person who says: I want the LLM to create that. That is then the decisive factor. Please create a piece with these and these features, and that's the
(01:44:15):
initial starting point. That's why this makes such a difference. This is also the initial disagreement concerning the understanding of AGI, or of weak and strong AI, where the strong AI needs to be able to say: I want to do that. And if you say, I want to do that, you need to have a motivation which has resulted in a feeling and
(01:44:36):
an emotion, in a state which really makes you want to do something, which is not the case, at least so far, in an LLM. And the same would apply actually to the case of wanting to create a work of art, where you say: I've got a strong desire and motivation to do so. But in the end, the outcome,
(01:44:57):
in many cases, even if there's a leap of creativity or a significant step forward, can usually be analyzed as a combination of different elements which have taken place before, and this is definitely something which can be realized by sufficiently developed AI as well.
Speaker 2 (01:45:17):
Yes, thank you for that exchange. And it is quite
fascinating to consider what creative feats AI may be capable
of in the future and where it will continue to
need human input or human guidance or human participation to
some extent. Rudy Hoffman writes, everyone who says, yeah, but
(01:45:40):
AI will never do this is destined to be wrong,
as Ray Kurzweil points out. And indeed AI systems have
surpassed a lot of human expectations in recent years and
even in prior decades, because the chess AI Deep Blue beat Garry Kasparov in nineteen ninety seven, so that's twenty
(01:46:01):
eight years ago, and there has been this chain of
AI breakthroughs essentially that has been quite remarkable. But there
are still, of course questions about where will humans have
an enduring advantage. And I actually delivered a presentation recently
at the VSIM Vanguard Scientific Instruments and Management Conference that
(01:46:24):
I hope to make public soon about my view of
why humans will not be made obsolete anytime soon. So
to those of you watching, stay tuned for that. Now,
we have about eight minutes left in our Virtual Enlightenment Salon,
and I would like to touch on several subjects before
we conclude. So one of them stems from an observation
(01:46:48):
from Rudy Hoffman. He writes, we really are living in
increasingly science fiction realities, and he thought these would be
more fun when he read the sci fi version, but
the reality is a bit different, and he also writes,
welcome to getting to see the Great Filter up close
and personal. Interesting times. And I have been of the
(01:47:10):
view that we are in a great Filter moment or
a period of years that I would consider the Great
Filter from the standpoint of existential risk, as well as
this being a decisive period in human history. The decisions
that are made today, the outcomes that are achieved today,
are going to shape the future of our civilization for
(01:47:34):
thousands of years to come, if not more, and might
even determine whether our civilization has a future. And it
seems to me, despite all of the fascinating progress of
technology and the new opportunities that it offers, certain fundamental
problems of the human condition have not yet been solved in the way that transhumanism promised a solution. Of course,
(01:47:58):
the biggest problem is human mortality, or at least the current upper limit of about one hundred and twenty-two years to human life spans; that has not been overcome.
But also in terms of societal problems, we see an
increasing coarsening of public discourse, especially in the United States. Stefan,
(01:48:19):
I'm sure you're aware of the polarization, the incivility, the
increasing recourse to political violence that is happening in this country,
and transhumanists generally have presented a hopeful view of humanity's future,
a view where technology and rationality can augment human potential
(01:48:39):
and lead to superior solutions, just like the thinkers of
the Enlightenment have advocated for the use of reason to
solve a lot of these societal problems, and yet public
discourse in particular and mainstream politics seem strangely resistant to this.
And I would like your thoughts, especially from the standpoint
of how Eurotranshumanism can help with that and
(01:49:02):
help to guide humanity toward more rational discourse and more
genuine applications of reason, science, and technology to actually solve problems that we're facing, instead of descending into this polarization and unfortunately even the violence that we're seeing today.
Speaker 3 (01:49:22):
Wonderful; that's a really amazing question, because I'm actually tackling that issue in the later parts of the Eurotranshumanism book, where I talk about the cultural elements of that entire paradigm shift. A lot of it actually has to do with paternalism and totalitarianism, and actually with essentialism, with claiming something has
(01:49:47):
universal validity. That's one of the fundamental elements, one of the decisive elements, which Eurotranshumanism actually aims to move away from: this weakness of not claiming that one presents something as truth in correspondence to the world.
(01:50:08):
This weakness is really a strength rather than a loss. It also matters when you encounter others: I do embrace, of course, personal rights and all that, but when you then encounter a different country, in Botswana or Nigeria,
(01:50:28):
and then you say, no, I'm coming as a white man telling you the truth about rights, it literally has this colonial implication. It's very arrogant, and it can be paternalistic. And if you take the same stance also concerning China, that's not being taken
(01:50:49):
very well, and rightly so. And so this has to do with that stance. And this does not mean, of course, that I'm not strongly upholding autonomy and liberalism, but I'm upholding it not as an essentialist truth, but as an as-good-as-it-gets solution.
(01:51:10):
It's a fiction, but it's a fiction I embrace, and I think it's a wonderful fiction, and I defend it. But by taking a more humble stance, by taking a more self-relativizing stance, it actually has some advantageous implications when
(01:51:30):
it comes to global encounters. And so in the United States we see the polarization coming about. On the one hand, we have a strain, and I mentioned Bannon before, with everything which goes along with that, and he, well,
(01:51:52):
he affirms traditionalism, like the ideal of Christianity in the Middle Ages, and a very hierarchical, essentialist, fundamentalist stance. So we have an extremely, not only conservative, but really an absolutist, essentialist understanding of religion which is being embraced here.
And on the other hand, we have got the cancel culture,
(01:52:13):
the extreme left-wing approaches, which in the end, from my perspective, have a similar kind of absolutism and radicalism in their approach as well. It is the encounter of two absolutes, two rigid essentialisms. Even
(01:52:37):
though the left is trying to overcome essentialism, in the end they actually embrace the same kind of radical problems which are encountered in the extreme religious fundamentalist approaches. And I know this is maybe not a very popular example, but I want to highlight it nevertheless, just to strengthen what I mean. So
(01:53:03):
we've got, on the one hand, the traditional conservative religious understanding, which would basically mean: well, sexual reproduction, that is natural law. The eyes are there for seeing, the nose is there for smelling, and the genitals are there for reproduction, and only if you use them for their reproductive purpose,
(01:53:26):
only if there's a possibility of having offspring, should you have sex. That is still implicit in the Catholic tradition as well, and many conservatives embrace such an understanding.
And then on the other hand, we have the cancel culture, which
(01:53:46):
says no even where competent adults are involved. What I think most transhumanists would actually affirm, not all of them, but the majority of transhumanists, is that as long as we have competent adults basically agreeing upon something, that is the decisive factor for something being a morally, and it should also be a legally, legitimate encounter.
(01:54:12):
That position is strongly rooted in the notion of autonomy, which I share and which is widely upheld in the transhumanist context. I do uphold autonomy as a fiction, by the way; it doesn't exist, but it is something which I cherish because of its implications. I think autonomy, together with negative freedom, is a wonderful,
(01:54:36):
wonderful achievement, which I strongly uphold as a central element of the Eurotranshumanist approach I'm defending. In contrast to that, what the cancel culture upholds is a relational understanding of the world, so that in the end
(01:54:56):
it is not autonomy; autonomy is always undermined by relational structures. There is no affirmation of it, and if autonomy gets affirmed, then it's a relational autonomy at most. Moreover, this relationalism also implies that basically full consent cannot be given if a strong type of hierarchy is
(01:55:20):
present: a strong type of hierarchy concerning age, concerning finances, or concerning other types of authority. And one of the examples would be Woody Allen and his encounter with, and falling in love with, Previn,
(01:55:42):
who is the adopted stepdaughter of his former partner, to whom he was never married, by the way. And he basically came together with her when she was in her first year of college, so she was a competent adult. Well, she grew up in the family, with him in something like the father's role. But in the end,
(01:56:04):
they were competent adults basically agreeing upon staying together.
Speaker 2 (01:56:09):
And, Stefan, unfortunately, we are out of time and I'm
being told that we need to wrap. But this is
a fascinating discussion and I think we should return to
it because the contrast that you mentioned between the various
essentialist strains in American political discourse in particular is one
(01:56:31):
that should be explored further. But in the meantime, thank
you so much for joining us. I think your commentary
in general has been quite fascinating, and hopefully we can
delve into it further and learn from it so we
can all live long and prosper.