Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Greetings and welcome to the United States Transhumanist Party Virtual
Enlightenment Salon. My name is Gennady Stolyarov II, and
I am the Chairman of the US Transhumanist Party. Here
we hold conversations with some of the world's leading thinkers
in longevity, science, technology, philosophy and politics. Like the philosophers
(00:22):
of the Age of Enlightenment, we aim to connect every
field of human endeavor and arrive at new insights to
achieve longer lives, greater rationality, and the progress of our civilization. Greetings,
ladies and gentlemen, and welcome to our US Transhumanist Party
Virtual Enlightenment Salon of Sunday, June fifteenth, twenty twenty five,
(00:47):
and we are pleased to offer a fascinating conversation today
with the founder of the US Transhumanist Party. Joining us
is our illustrious panel of US Transhumanist Party members, officers,
and advisors, including our Director of Visual Art and current
(01:08):
Vice Chairman, Art Ramon Garcia, our Director of Scholarship, doctor
Dan Elton, our biotechnology advisor and founder and CEO of
Sierra Sciences, doctor Bill Andrews, and our friend and allied
member David Wood from the UK. He is the founder
of London Futurists and the executive director of LEVF, the
(01:29):
Longevity Escape Velocity Foundation. He was also one of the
co founders of Transhumanist Party UK when it existed, which
then was rebranded to Future Surge. And our special guest
today is none other than Zoltan Istvan, who founded the
Transhumanist Party in twenty fourteen. He ran as its first
(01:50):
presidential candidate in twenty sixteen, and after that time he
stepped down from any leadership roles in the Transhumanist Party,
but we've of course continued our communication. He has remained
a valued advisor and he has run for office several
times since then, as a Libertarian for governor of California
in two thousand and eighteen, then as a Republican challenging
(02:15):
Donald Trump for president in twenty twenty and now he
is running for governor of California again in twenty twenty
six as a Democrat. So Zoltan, welcome. It's a pleasure
to speak with you again, and I wanted to give
you the opportunity to first share some information about your
(02:36):
campaign and why you chose to run as a
Democrat in particular, especially considering your prior runs as a transhumanist,
libertarian and Republican. I'm curious if countering some of the
policies of Trump may play a role in this, or
are you just being pragmatic about this in terms of
(02:59):
seeking to maximize your chances with the California electorate.
Speaker 2 (03:04):
Sure well, first, thanks, Gennady, for having me here. It's
nice to be back and see some familiar faces. And
I apologize that I only have till two pm, so
just about an hour from now. But let me answer
I guess that big eight hundred pound gorilla question, which
is why am I, you know, running on different parties
and have ended up at the Democrats. Let me just say,
(03:26):
you know, first and foremost, my wife works at Planned Parenthood.
So to have actually run on any party that would
be opposed to Planned Parenthood, being married to my wife
of sixteen years, it would be incredibly difficult. So of course,
when I was running for the Republicans, and even plenty of
Libertarians now oppose abortion, I would have a very hard,
(03:46):
you know, time doing that because of what my
family brings to the table, and my wife particularly. So
you know, coming home to the Democrats, it really
was either here or the Transhumanist Party in terms of
those agendas working, just because women's rights is such a
big thing in my personal family. So that's just, you know,
kind of really what's happened. Additionally, it's impossible to run
(04:10):
in California and I think any other party other than
the Democrats, because it's essentially a one party state.
Speaker 3 (04:17):
Now.
Speaker 2 (04:17):
There are lots of variations within the party and the
ideologies and stuff like that within the Democrats in California,
and I'm probably the most right leaning of all the
Democrats running for governor right now. Of course, that still
makes me a centrist in terms of liberal values because
I've sort of always been a centrist. And despite changing
(04:38):
parties here and there, I mean, honestly, my ideas have
changed fifteen to twenty percent maximum over a fifteen year period.
It's not like things have really changed in how I
deal with things. It's just parties. I am true to science,
the scientific method, to technology, to transhumanism, life extension. Those
are the things that matter to me still, and they're
not going to change whether I'm at this party or
(04:59):
that party. It just happens to be that right now,
and running for office, it makes the most sense to run
as a Democrat. And also I think, as I mentioned
the very first thing, just for family orientation, my wife
works at Planned Parenthood as a medical doctor. It just
makes most sense to be here doing that right now.
Speaker 1 (05:20):
Yes, thank you very much, Zoltan, for that response. And
when you ran in twenty eighteen for California governor, you
ran as a libertarian. Of course, the Libertarian Party has
really changed since that time, and I would say, based
on those changes, the Libertarian Party has left me. So
(05:42):
it's also the case my views from let's say the
middle of the last decade haven't changed that much. And
I supported ultimately the Libertarian candidate Gary Johnson in twenty
sixteen since I couldn't vote for you in Nevada, Zoltan,
But I found that the Libertarian Party has really shifted
(06:04):
in a kind of well, a fractured direction, you could say,
or directions. But there are many people in the Libertarian
party who are now anti technology, who are more of
these let's say, back to nature, reactionary luddite types, and
I don't appreciate that. But when you ran for governor
(06:25):
of California, you got fourteen thousand, four hundred and sixty two
votes as a libertarian in the open primary, and I
actually think that's a good result for a campaign like
the one you ran. But the reason why I wanted
to bring this up is California has this open primary
system where anybody can essentially participate, and then the top
(06:48):
two winners of that primary, who, as you correctly pointed out,
have essentially been Democrats every single time, are the ones
who advanced to the general election. So I'm curious to
understand if your decision to run as a Democrat is
predicated on the expectation that you might actually get into
(07:10):
the top two that way, whereas running under any other
label you wouldn't or do you not consider that realistic
in terms of how the primary results would turn out.
Speaker 2 (07:22):
Well, you know, just trying to be as honest as
I can, it would be very difficult to get into
the top two. And historically a Republican does get into
the top two just because the state is pretty much
split sixty forty and that might not seem like a
wide margin, but it's almost always the same. There has
been very little breaking through. Maybe Arnold Schwarzenegger was, you know,
the last person to actually break through, but right now
(07:44):
it just seems like there's really no chance. And they
had the recall of Gavin Newsom. I mean, Gavin won
by a lot. I don't know about the top two.
A real goal here is to try to get into
the debates, end up in the top five. And you know,
an ideal scenario would be something like Kamala Harris. You know,
a former vice president decides to run. I'm in the
(08:06):
debates with her. I get to ask her some very
hard transhumanist AI questions about the future and make my mark.
There's a possibility that, you know, very good possibility, probably
I won't win the gubernatorial race, but this may transition
into a race for the presidency then for me and
(08:28):
my whole team knows that. And you know, we've been
pulling in quite a few Andrew Yang people, you know, because
a lot of my policies now are relying more and
more on basic income. And you know, we'll see how
we can do. I'm not sure though, that in California
I'm going to break into the top two. We are
(08:48):
trying our best, and I got to be honest, sometimes
it just takes one single donor with a lot of
money to push you through, and all of a sudden
your team expands by ten times, or maybe a Joe
Rogan interview or something of that nature. We're trying everything
we can. We're getting a lot of traffic. I mean,
we're getting a huge amount of actual social media traffic
and just articles and stuff, but it's not enough
(09:09):
right now. But let me just say also, we're over
twelve months, I believe, you know, close to twelve months almost
exactly from the primaries, so we're still very early in
the season for a gubernatorial run. There's not a lot
of news happening, so it's really an open field. But
I think within twelve months, AI and automation and robots
(09:29):
are going to be hitting the job market so severely
that the things that I'm speaking about are going to
resonate and everyone else is going to be left in
the dust. So maybe there is a possibility to really
rise up quickly in a viral moment and take the stage.
Speaker 1 (09:45):
Yes, thank you for that response, and that gave us
very good insight into your strategy. Now, one of our
audience members, who's also a very dedicated contributor to the
Transhumanist Party, Josh Universe, notes that he hopes that your
transhumanist ideals are still similar, and you said essentially your
(10:07):
views have changed, maybe by about ten to fifteen percent.
But I would like you to expand on this, and
in particular, would you be willing to openly embrace and
articulate the ideas of transhumanism and of radical life extension
in the course of your gubernatorial campaign.
Speaker 2 (10:28):
Well, of course. So first of all, I think, funny enough,
life extension is now a mainstream movement. And one of
the reasons life extension is mainstream is because of the hundreds of
billions of dollars pouring into the field. If you pour
enough money into the field, all of the sudden, it
becomes mainstream. And the same thing with artificial intelligence, which, you know,
six eight years ago was still a science fiction concept,
(10:49):
is now one of the most hotly debated topics, probably
far more than climate change at the moment. And so,
you know, ideas that were once quite fringe are now
at the forefront. Maybe transhumanism in itself is still not
that hot or still not that popular to talk about,
but that, you know, has a lot to
do with the fact that AI is overwhelming the transhumanism movement.
(11:12):
You know, we always considered AI to be a part
of the transhumanism movement. But all of a sudden, it's like,
wait a sec. Now that it's here, and now that
we don't have control, we sort of don't have control,
or we may not have control in the next few years,
how do we even deal with such a thing that
could have its own consciousness and potentially become more
powerful than us? So it has, kind of, to some
extent left transhumanism in the dust, at least in terms
(11:34):
of media coverage and the public's questioning of it. So
those things are you know, right up there. But I
would absolutely be discussing all things. In fact, everyone knows
my campaign is very much based on artificial intelligence. It's
based on achieving the next stage of life or humanity,
which is a transhuman age, and definitely life extension is
(11:54):
still at the forefront. I may not be talking transhumanism that much,
I may not be saying the word, but to be honest,
you know, thankfully the word has continued to grow, but
what's really grown is the concept behind
the word. You don't even need the word anymore
to understand we are entering an age, you know, with
Neuralink and driverless cars and CRISPR, that the transhuman age
(12:16):
is essentially upon us. We're sort of in the
door already. Now we're starting to explore what does that mean.
And so you know, it's kind of like environmentalism. You
don't always necessarily say, oh, it's the environment anymore. You
just understand that recycling or taking care of the planet
or not trashing nature is what you do. And so
I think transhumanism is that same thing. It is the
(12:37):
next level of achievement and prosperity, of living better and
longer lives. And so extreme longevity is becoming like the
most natural thing; you talk about it in terms of,
this is what you do. In fact, most people, when
they talk about extreme longevity, are just talking about it in
terms of business and commerce and capitalism, which is wonderful
because before it used to be is it possible to
(12:57):
conquer death? And now it's like, forget whether it's possible. Where
do we put our money to start trying? And that's
a great new beginning. That's something that's new in the
last two to three years in the entire movement.
Speaker 1 (13:11):
Yes, thank you for that response, and I think Josh
agrees with you. He says the ideas and values, applied
and philosophical, of transhumanism have exploded, even though the term,
I think needs a lot more circulation, and that's what
we're trying to achieve in the US Transhumanist Party. Josh
also says he agrees that life extension is definitely mainstream.
(13:34):
And I want to highlight another comment from Daniel Twedt,
who was our twenty twenty four vice presidential candidate. He
would love to see you on any debate stage, Zoltan,
and I think I would echo that sentiment. I really
appreciated you coming to Chicago in March of twenty twenty
to participate in the debate of eighteen presidential candidates from
(13:57):
eight different political parties. I think you made great points
in that debate when you challenged Donald Trump for the
Republican nomination. And I am also curious since twenty twenty,
we've had a change in the US Transhumanist Party constitution.
Prior to that time, we were only able to endorse
candidates who either ran as independents or explicitly ran under
(14:23):
the Transhumanist Party banner. Since that time, we've expanded our
candidate eligibility criteria for endorsement, so hypothetically we could endorse
somebody running as a Democrat or a Republican, or a
libertarian or a Green if that person either explicitly advocates
(14:44):
for life extension or transhumanism in their campaign, or if
they're willing to openly publicize their endorsement by the Transhumanist Party.
And right now, the question that I would have for
you is, hypothetically, would you be open to such an
endorsement with essentially that kind of understanding? And it seems
(15:10):
like you would be speaking about these subjects to some
significant extent. Your rhetoric may vary, but you would be
touching on the main themes. So if we were to
hold a vote on the question of endorsing your candidacy,
and I have no guarantees about how that vote would
turn out, by the way, but would you be open
(15:32):
to the prospect of that happening.
Speaker 2 (15:35):
Yeah? Absolutely, I would welcome the endorsement of the Transhumanist
Party, naturally. And I pretty much welcome, you know, well,
most any endorsement, as long as it stays within the,
you know, the perceived lanes. And just
so that everyone knows, I'm constantly tagged with
founding the Transhumanist Party, as all of you know. You know,
recently I spoke at the United Kingdom's Parliament, and in
(15:57):
front of me it had a Transhumanist Party sign; that
was what they put me as. So there's news
still following me, you know, basically every week. So of
course I welcome that, and I still really enjoy everything
the Transhumanist Party is doing and standing for. It's just
as I've gotten older, I would like to win. That's
I think one of the things. And as I turned
(16:19):
fifty recently, I have to be realistic that if you're
not part of the Democrats or the Republicans, for better
or worse, you are not going to win a serious election.
And that's been a very tough thing to swallow, but
you know, one tries to be realistic. But I absolutely
support third parties. I absolutely support the Transhumanist Party and what
(16:40):
it's doing. I follow it closely, and, as you know,
I'm an advisor to it. So of course I'd welcome
an endorsement, and I hope, you know, the Transhumanist Party
continues to grow and make its influence felt. In fact, to me,
nothing is even more important than having third-party influence
right now, because I feel like the two parties have
become such kind of monsters at each other, more than ever.
(17:01):
We need a little bit of the outside power to
try to influence them.
Speaker 1 (17:04):
You know.
Speaker 2 (17:04):
One of the reasons I'm running is because I feel
the Democrats have done an absolutely horrible job. And you saw,
in the national elections last year, you know, Trump won
by a large margin, and many people think, well, how
could Trump win? You know, he comes across as somebody
who you know, you would think shouldn't be president for
many reasons. And the problem is that the Democrats have
(17:26):
been acting worse. You know, I say this honestly and openly.
You know, people are saying Trump's fascist. Well, I live
in a fascist state. If you're a real estate developer,
as I am, and you try to build buildings, it's
almost impossible to do so in California. So when they
talk about fascism, when they talk about regulations that make
it so that people can't even run their businesses, I
(17:46):
am a direct victim of that. I have seen my
business do nothing in five years, and Gavin Newsom, for
eight years, has made this state worse and worse and worse.
So don't in any way think that I'm your traditional Democrat.
I am somebody who wants to completely reform the Democrat
Party and make it into something like maybe what it
used to be, an old school thing that had liberal values,
(18:08):
a centrist kind of position. Somehow it's become identity politics.
You're taking orders from the one percent, and you're fighting
all these incredible things, and the majority of the citizens
in California are stuck like me. They can't grow businesses,
inequality is growing dramatically. People are poorer than they've ever been.
And yes, maybe the state itself is doing well,
(18:30):
but that's in terms of monetary things. And that
really just has to do with a couple of
valuations of major AI companies; it has nothing to do with
the forty million residents here who are looking around saying,
my god, inflation's out of control, I can't afford housing.
There's water issues all over, there's wildfires all over, there's
riots all over LA. I mean, people are celebrating throwing
(18:52):
their arms up against Trump. What about actually celebrating having
a normal society just operate in a normal way. And again,
I'm not saying I'm not against Trump. I was in
the No Kings protest yesterday. But there's something far deeper
than the Democrats and the Republicans fighting each other. It's what
is actually valuable for the citizenry, what is valuable for
the greater good of the people. And that's really what
(19:14):
I'm trying to do, is I'm trying to reform a
lot of the Democratic views that are out there, to
make them just basically more centrist and liberal to what
they were, so that we could actually move forward and
everyone can have a more prosperous life. I haven't seen
much prosperity for many Democrats or Republicans for a long time.
Speaker 1 (19:32):
Yes, indeed, thank you for that response. And, as Art
Ramon writes in the comments, we're going to have to
assimilate the Democratic Party because clearly the orthodoxy of the
Democratic Party is not in accord with transhumanist values or
even the values of somebody who just wants reasonable human progress.
(19:53):
Of course, the Transhumanist Party is very much a pro development,
pro building party, and we know just how difficult
it is to construct new housing in California. Actually, you
mentioned driverless cars. I took some rides in Waymo autonomous
vehicles when I visited Los Angeles. Fortunately, I was there
(20:16):
just a few days before the riots ignited, and there
was this disgraceful spree of destruction aimed against the Waymo vehicles.
But I experienced those vehicles firsthand, and I think it's
wonderful technology. I would trust it over the vast majority
of human drivers. But driving through some of the let's
call them suburbs of Los Angeles, I saw this stark
(20:39):
contrast between the high tech, futuristic environment in which I
was sitting and experiencing the technology and just some of
the destitution that I saw, the homelessness, the dilapidated infrastructure.
So you have your work cut out for you in
terms of persuading the Democrats to pursue a different course,
(21:01):
and if you do happen to win as governor, to
turn that situation around. But that's definitely aligned with a
tech friendly and techno optimistic vision, because I do think
we can build our way out of this crisis. So
reading through your platform on the campaign website, I think
there is a lot of alignment between the policies that
(21:24):
you propose and the US Transhumanist Party's platform and values.
But there is one issue where there is a rather
significant divergence and that's in regard to perhaps not even
some of your specific recommendations on AI, but the general
tone of your campaign rhetoric. So you of course raise
(21:49):
quite an alarm about AI and automation creating massive job
displacement and other issues as well. Now, the USTP, in
its platform does take a much more positive view of AI.
Section twenty one hundred and twenty of the USTP platform
was adopted in just March of this year, and while
(22:11):
it does recognize some potential pitfalls and concerns to look
out for, like the potential for algorithmic bias and unfair discrimination,
we do state that the United States Transhumanist Party supports
approaching technologies of artificial intelligence primarily as opportunities rather than
as threats. And when we delve more deeply into your
(22:34):
specific policy proposals like UBI or enabling each household to
have a robot that can do the chores, there is
in fact much that would resonate with a more techno
optimistic view and be consistent with the USTP platform. But
that initial rhetorical angle is very different. And what I'm
wondering is why did you apparently choose to start with
(22:57):
a message of fear rather than one of hope and opportunity,
especially given that there is so much AI related fear
and even what I would consider AI doomerism, the thinking
of someone like Eliezer Yudkowsky, who thinks it's inevitable basically
that AI will wipe us out this decade, which I
clearly don't agree with, and nor does the USTP. And
(23:17):
you seem to have a more nuanced stance. But why
do you start with essentially rhetoric of countering AI or
the threat of AI, rather than what AI could help
us to do and to build.
Speaker 2 (23:30):
Sure, let me just start by saying that whatever you
read in major media is absolutely not reliable whatsoever. So
let me take you back to a keynote that I
gave at Florida Atlantic University where I met a ton
of AI people, engineers and whatnot. And that's really where
my downward spiral began. First off, I know you guys,
(23:54):
I've been following you online. I know nobody thinks AGI
is here. I can tell you from talking with engineers.
AGI is being worked on in laboratories twenty miles away
from me right now, in the exact same way that
people are test driving the twenty twenty seven X5
BMWs, they are working on AGI right now, meaning that
(24:17):
AGI already exists, it's just not something that can be
released because of all the problems it causes. Now, I've
talked at a bunch of other events, talked to
Ben Goertzel, you know, I debated him publicly at the
firehouse in San Francisco. And then last weekend, I think
it was last Sunday, I went to Max Novendstern's house.
Now, Max Novendstern is one of the partners of Sam Altman.
(24:40):
Sam Altman, of course, is in charge of ChatGPT.
And you know, there were a hundred-odd AI engineers there.
Not one person disagreed with the actual statement that I'm
saying now, that we are already working on AGI.
So just to understand, like the way the car market works,
twenty twenty six BMWs or Audis, whatever car, they're already
on the production line right now. They're being made, they've
(25:02):
already been test driven and all the kinks are worked out.
They're selling the twenty twenty five, but that was, like,
developed two years back, so they're actually working on twenty
twenty sevens, test driving them around tracks, looking at them,
looking at the kinks. We are already working on an
AI that's as smart as us, which means in twenty
four months we'll be working on AIs that are probably
(25:23):
super intelligent or getting close to that point, and it's
game over. That's actually why I bought a two thousand
and one Jeep literally about three months ago, because I
am worried that any moment the cars can stop on
the freeways. Today it's no longer oh, this can happen
in six months. In fact, I have two friends who
have literally left the country recently to go build bunkers,
(25:44):
one in Thailand and one I can't even say where.
People that are on the ground, that are working at Google,
at OpenAI and at these places are actually taking major
precautions today because they already know what's happening. And so
I want to tell you again, I know I've
been reading that nobody thinks AGI is coming that soon.
It's not a question of whether it's coming; I've talked to engineers,
(26:06):
it's here. It's really just a matter of these things
can't be released for capitalistic reasons, for pr reasons, for
being politically correct. So I am absolutely warning people. I
am absolutely trying to tell people that when Zolty buys
a Jeep, a two thousand and one Jeep that the
AI can't control, it's just in case the AI accidentally
(26:27):
shuts down the Internet or shuts down the water.
And again, this AI may not do it purposely. It
may be like the paper clip theory. You might have
a benign AI that does all sorts of things. It
doesn't have to be as smart as us to destroy humanity.
It doesn't even have to be nice or evil. It
can do it for a variety of reasons. But the people,
the engineers that I'm talking to at that one hundred
person party at Max Novendstern's house recently that Wired magazine covered,
(26:51):
nobody had an idea on how to
create an alignment that could be positive for the human race.
The only thing you could actually try to do, what
Dan Faggella is actually doing, is trying to create
a worthy successor, meaning something that's actually useful in terms
of an evolutionary purpose beyond humanity, because there is a
good chance that we will be completely wiped out or
(27:13):
we're just going to be second to these things. In fact,
a lot of us at the party were saying it
would be wiser to have a nuclear war right now
to stop progress for human civilization from a game theory perspective,
because at least in an all out nuclear war, pockets
of human beings would survive, Whereas if you get to
superintelligence and it decides it doesn't like human beings, it
(27:37):
could easily wipe out every single last human being with
a superintelligence. So we're at the biggest crux and
crossroads of humanity that I've ever seen. I have no
idea what to do. I have two young daughters. Do
they go to college? That's a joke to me at
this point. Do they go to college? I mean, it's
almost impossible to talk about longevity and transhumanism and even
(27:58):
my campaign anymore without warning people that we are entering
an amazing time with AI. We can't really stop developing
it, because then the Chinese will develop it, and then that's
even worse because then we have a Chinese super intelligent
AI that destroys us. So we're stuck in this incredibly
weird drowning tide pool, where I think it's like a
whirlpool, where we're kind of wondering.
Speaker 1 (28:20):
What to do.
Speaker 2 (28:20):
But I can tell you this party I went to
last Sunday at this billionaire's house, thirty million dollar mansion
overlooking the Golden Gate Bridge on the cliffs. Nobody. And
there were hundreds of experts, you know, AI people. You
all know them. They're all the time in the
magazines and stuff. No one had an answer. It was
like we're on the Titanic.
Speaker 1 (28:41):
Interesting. Well, thank you for that response and for giving
us some inside information about attitudes. I am deeply alarmed
by an attitude that does seem to be increasingly prevalent
in elite circles that nuclear war is less of a
risk than AI or artificial superintelligence. I think quite the opposite.
Nuclear war is not only the greatest existential risk today,
(29:04):
but it is the greatest existential risk that the human
species will ever face. And in fact, we are living
through the most dangerous time in all history today because
of the geopolitical conflicts that threaten to erupt in nuclear war.
And if we can find any sort of technological or
moral or diplomatic solution to those conflicts, then we will
(29:26):
never be at this much risk as a species again.
But I do think the difference also is in terms
of the time frames for AGI or artificial superintelligence, So
I clearly think those are possible, and I think Ray
Kurzweil's time frames for an artificial superintelligence by twenty forty
five are fairly realistic. But in terms of what is AGI,
(29:49):
my general impression, based on how large language models work,
is that there is exactly zero probability that a large
language model could become a true artificial general intelligence in
the sense of being able to cross multiple domains, like
a self driving car learning how to play chess, for example.
And you and I were on a Freedom Fest panel
(30:10):
in Las Vegas in twenty seventeen. One of the other
panelists was quite an erudite scientist who was one
of the people who coined the term AGI in conjunction
with Ben Goertzel: Peter Voss, and we had a Virtual
Enlightenment Salon with Peter Voss on August fourth of twenty
twenty four, in which he explained, essentially, there is no
(30:31):
way in his view that the current trajectory of popular
AI models, the large language models, the generative AIs, could
lead to AGI. We would need a different technology, a
different paradigm. And I do happen to agree with him.
I know there are others who don't, but I just
wanted to illustrate essentially the contrast in viewpoints there. But
(30:56):
that's for our members to consider. I do also want
to ask you, since this has been a subject of
great interest among our members, what are some of your
views on issues like making housing more affordable and also
(31:16):
in an era of increasing automation where it's possible that
some people may lose their jobs. You've been an advocate
of universal basic income, but you stated that your views
on this have evolved. So we have in the US
Transhumanist Party the concept of the Federal Land Dividend, which
(31:37):
you essentially originated, and we have adopted it as part
of our platform. The idea is to lease out unused
federal land with the exception of national parks and national monuments,
and allow private corporations to utilize it but pay rent
to the federal government, which then gets used as a
basic income. But you stated more recently that your views
(31:59):
have broadened, that you would be open to a wider
range of basic income options or approaches, So if you
could talk about that as well as since you mentioned
Andrew Yang, the Civic Party coalition is interested in whether
you'd be willing to align with Andrew Yang's Forward Party
on issues like UBI and also ranked choice voting, which
(32:19):
the Transhumanist Party of course supports.
Speaker 4 (32:23):
Well.
Speaker 2 (32:23):
First of all, I support ranked choice voting and I
always have, and I think that makes a lot of
sense because it does even it out. And whether I
would be willing to you know, cozy up with the
Forward Party, you know, we'll see. Right now, it's just
like again, for practical reasons, we're trying to actually win
or make a difference and be a top candidate, so
(32:43):
we're just kind of sticking with whatever the existing game
is now. I've been independent or, you know, trying to
do alternative things for a long time. But let me
just say, you know, first off, there's a lot of
different ways that UBI can work. And I think beyond
UBI now, what's happening is the threat of AI
and the reality of AI happening. You know, this is
(33:03):
something that's brand new in the last two to three years,
and it's moving me to the left. I think anytime there's
actually a giant war or world war, a lot of
people move closer, not necessarily to the left, but they
expect the government to be more a part of their
lives and helping to come up with a solution. And
I think AI is going to be moving everybody that
(33:24):
direction as we come to this realization. And of course
workers are going to lose their jobs. I feel the
majority of workers will probably lose their jobs in the
next five to ten years, and certainly it could be
a lot sooner because if you look at the factory
footprints that are being built right now in China and
the United States, I mean, there are millions of humanoid
(33:44):
robots being planned to come out by next year sometime,
and already some will start hitting the market this year.
If you know, by twenty twenty seven, it could be
tens of millions, maybe billions. I mean, it's hard to
know exactly how much they can produce, but clearly if
they can produce five hundred million humanoid robots that
are using these types of generative AIs that I've
(34:06):
been speaking about, that are being worked on in laboratories right now,
ones that are as smart as us and soon probably smarter,
and as agile as Olympians, which they soon will be.
If you've been watching you know how these robotic videos
are going. There's really nothing that humans can do better
or be smarter at. So we're going to be completely
obsolete in terms of working. So it would be up
to the government to step in or maybe the kindness
(34:30):
of CEOs to say, okay, let's keep the humans around,
because... but you know, that's probably not going to happen
in terms of government or kind CEOs. Probably what's
going to happen is government's going to say we're entering
a new age. So this is, you know, the biggest platform piece.
I just released it today, if everyone wants to go
check it out. We have something called the Automated Abundance Economy,
(34:50):
and that's the biggest kind of policy change I think
that I have had out in a lot of years.
And while it's not a giant change, it just coalesces
kind of a lot of different ideas together,
including what you mentioned earlier: one household, one robot. You know,
a lot of us spend a lot of time doing chores, cleaning,
mowing the lawn, and laundry, that it takes about ten
hours a week. If you can have a robot in
(35:11):
every household that does that, let's say by the end
of next year, within the next three or four years,
everybody will have a lot more time, and not
just, you know, for working. The Automated Abundance Economy is really
focusing on automating everything so that we kind of get
to this much more luxurious existence. But I want to
be very clear that the Automated Abundance Economy, my version of it,
(35:33):
is very different from automated luxury communism, which is
the other kind of angle of this type of automated world.
The Automated Abundance Economy, what I'm supporting, would still support capitalism,
still support investments. It would just mean that there'd be
some people that get really rich off these automations, but
everyone else has sort of a basic income, has automation
(35:54):
that takes care of their lives, housing, you know, medicine, healthcare,
things like that. You would have basically everything covered. So
if you wanted to work, you could, If you wanted
to make money, you could. So I still think, you know,
our Automated Abundance Economy incorporates capitalism, but promises that
there will be dividends from a universal
basic income. Now, the universal basic income, as this question
(36:15):
started, could be based on my Federal Land Dividend. I
would love that, but I'm no longer so stringent about this.
If we need to raise taxes, we can do so.
If we need to deal with crypto in a way
that incorporates it, let's do that. If we need to,
you know, give dividends from the companies, like Bill Gates
is saying, tax the AI companies, tax the robot companies
(36:37):
to pay the universal basic income. If it's something like that,
I would support it. At this point. As someone running
for office, I'm not concerned with how we take care
of people that lose their jobs to automation. I'm concerned that
we do so as a top priority. Now that doesn't
mean I want to sink the capitalistic economy. There's got
to be a better way than just you know, it's
one or the other. But I think it's a necessity
(36:59):
because you're going to have millions of people losing their
jobs here in the next few years. They're going to
raise up their pitchforks and they will turn the world
upside down. They will riot and demonstrate, and you're seeing
all the chaos already. If you think this is chaotic,
this is nothing compared to what happens when millions of
people actually lose their jobs. When they lose their jobs,
they're going to lose them forever. There's no retraining or
(37:19):
this or that. So we need to make sure that
people change their viewpoint on what it means to be
a working person. So part of the other thing, the
kind of third component of the automated abundance economy, is
that you don't value your perception of yourself just by
what kind of work you do. You have value just because
you're a human being. But maybe you're creative, maybe you're
(37:39):
an artist. Either way, you have a basic income that
takes care of things. Hopefully you have a robot in
your household that makes your chores and your life easier.
So you have much more of a life of leisure
than you've ever had before. What are you going to
do with that, Well, it's not for me to say
what people will do with that, but the automated abundance
economy incorporates a lot of these ideas into a world
where people have a lot more time to do the
(38:00):
things exactly that they want. They may not be as
rich; the American dream and capitalism, some of those
things may not work for them anymore, but they'll have
a lot of freedom to develop themselves. Community, church, if
that's what they like. Whatever it is that people want
to do, they can go down that road and follow
(38:21):
it to their passion. And so that's really what the
Automated Abundance Economy is. But, I just, you know,
to finally answer your question: for me, it's
no longer about how we pay for a basic income.
It's that we're coming to the goal line. We're coming
to the end of the game. We need to establish
it now, regardless.
Speaker 1 (38:38):
Yes, thank you for your response. And as a follow up to that, Josh Universe wonders,
do you have any position on UBC, or universal basic compute?
So the idea would be to give a certain amount
of computing resources to everybody, especially in the age of AI,
so people can have their own AI agents, so that
AI isn't just something that's applied to them by large corporations,
(39:01):
but they can have the ability to deploy it themselves.
Speaker 2 (39:04):
Well, you know, it's an interesting idea, and I think
something like that would come out, but it's kind of
like the word transhumanism. I think universal basic income
is established enough. Also, the idea of one household, one robot.
This is going to work, but we have to be
careful about too many weird things, because, it's just, these
are major parties, the Democrats and Republicans. As you know
(39:24):
from when I ran as a Republican, they're very difficult
to actually get into and change. And maybe I got
sixth in the primaries in the US presidential race against
Trump for the Republicans. But whether I made much of
an impact, that's a bigger question. The question really is
that we need to stick with some basics. My campaign's
trying to be very basic: universal basic income. Let's
(39:45):
give everyone a robot so they work less, and let's
also take care of the basics. You know, crime is
increasing in California. We have riots and things like that,
or just you know, people breaking into Apple stores and
taking phones by the dozens. I mean, we have to
get stuff like that under control. This giant train project, the supertrain,
you know, fast train that's costing hundreds of billions of
(40:05):
dollars and going nowhere. Stop that stuff, stop it now. We
need housing for people. We need to deregulate the environment,
and that's nothing to do with Republicans or Democrats; just
deregulate things so people can actually build houses and have
a place to live. Otherwise they're all just going to
leave. Then, say, we have wildfire issues. You know,
at least there's technology, AI spotting from satellites and stuff,
(40:25):
can really help out. But the point of the story
is it's just too complex, I think, And I think
one of the things is a lot of my campaigns
in the past have been sort of science fiction based,
but I'm really trying to keep this one down to
the ground. Listen, let's get a robot in every household
over the next five years, and let's try to get
you a basic income. That basic income might in the
beginning not be enough to feed you fully or to
(40:47):
house you fully. But if I can get two hundred
fifty dollars in your pocket, if I can get five hundred
dollars in your pocket every month, I'm gonna do that.
And this is a start. Because I assure you just watch,
you're already seeing Microsoft, Google, all these mass layoffs happening.
They may say, oh, you know, it's the business environment. Truthfully,
this is bullshit. AI is starting to completely take over
(41:08):
the market and the industries across the United States and
the world, and very quickly, those who don't have like
the most developed skill set are going to be replaced
and even those with the developed skill set are going
to be replaced in the next twenty four to thirty six
months unless somehow government steps in with some kind of program.
Speaker 1 (41:27):
Yes, thank you for that response, and I do think
there are some differences among our members as to the
timeframe for automation creating net displacement of jobs, though I
do think in the near term there could be some
turnover in jobs as some prior occupations are made obsolete.
But it's also possible that AI could create new jobs
(41:51):
people who work with AIs, people who are prompt engineers,
and other areas of work, and indeed new meanings and
capitalistic exchanges that are created by AI. But with
all of that aside, we have some interesting feedback. Rudy
Hoffman says he's glad that you're not virulently anti capitalist,
(42:13):
and indeed neither is the USTP. The USTP has never
been an anti capitalist party. And then we also have
Art Ramon's comment, can my house cleaning robot manage my
portfolio and make me rich? To which I respond in
that case, the house cleaning robot might actually be an
(42:34):
AGI because it could jump across domains. And Josh points
out what you're referring to is often called the meaning economy,
where people no longer associate their identities with specific jobs
they do. They find a broader sense of meaning in
their lives, and I think that's very important for people
to re envision themselves as whole individuals and not some
(42:56):
particular task that they do for a corporation. Mike Lazine writes,
the Republicans and Democrats are stuck in a box and
they have a hard time breaking out of it. It
seems your strategy is to kind of enter the Democratic
box and try to push it out or push the
Democrats out of the box. And I hope you can
succeed with that, at least to a significant extent. Now,
(43:20):
given that we have about twenty minutes for your appearance,
I would like to ask the other panelists if they
have specific questions for Zoltan. Who would like to begin?
Let's see, Art Ramon, David, or Bill, any questions that
(43:41):
you have based on what you've heard so far. David,
please go ahead, and then Art Ramon. Let's see, David,
I think we need to unmute you there.
Speaker 4 (43:53):
Fantastic. And Zoltan, I wonder if you have any more
insights from that meeting that Dan Faggella organized recently
that you attended, because I wish I was there, and
it's one of the times I regret not being close
to Silicon Valley. But it did seem there was a
lot of innovative, creative thinking, even though there weren't
solutions that could be widely accepted.
Speaker 2 (44:16):
Yeah. Yeah, well, the Wired... First off, David, nice to
see you again. Wired did a pretty fun write-up. It
wasn't as spicy as my write-up would have been.
But let me just say, first off, this was a crazy,
crazy thing. I wish it had been live streamed to the
whole world. So first off, they you know, they invited
kind of top AI engineers and experts and writers to this.
(44:39):
You know, this Max Novendstern's house, he had just closed
escrow. It's a thirty million dollar house, must be like
ten thousand square feet, literally on the cliff, and you know,
they had all the servers. It was like a Great Gatsby
type of party, you know, and some people were
dressed like me, very formally, because here I am
campaigning for governor. Other people were in, like, shorts and torn shirts
(45:01):
and just, you know, a bizarre kind of mix
of people. And there were a couple speakers, and it
was really about, well, the name of the symposium is
Worthy Successor, and it was the idea that it's probably
too late to align AI. Therefore we need to start
worrying about how to create a worthy successor for the
(45:24):
human race, as opposed to worrying so much about whether
AI can align with the human race. And that's a
very different concept than, let's say, will AI help us.
It's no longer will AI help us; it's more what
can we do to help AI carry on whatever it
means to be alive in the universe, just in case
(45:45):
humans don't make it or something of that nature. And
so you can see the dystopian angle to it, and
yet you can also say, wow, that's a very
forward thinking idea. And you know, not everyone necessarily supports
that we won't be able to align, but I can
tell you all three speakers realize, from a game
theory perspective, the implausibility of getting a superintelligence to
(46:08):
consider our own perspectives once it's a superintelligence.
Because, it's just, simply, why would you? Why would God
listen to you? Why would we listen to an ant
or a dog? Let's just say, as something of a comparison,
if we could. And I think that party just illustrated
to me something that I'd already written in my Newsweek op-ed,
which, if anyone hasn't read it, is about Ben Goertzel and
(46:31):
I debating in public about whether we owe evolution, whether
we owe something to evolution to give a worthy successor
to the human species, just in case humans don't make it.
And of course that's kind of where I come down
and say it's time one hundred percent to stop the
development of AI period, whatever it takes, and you know,
(46:52):
and that's game theory. A lot of other
academics are starting to support that. If you see World
War Three break out, Iran and Israel, none of us
see it anymore as something to do with
Zionism or whatever. This now has to do with end
of the world AI tactics and game theory. And while
that may sound like a conspiracy theory, I assure you
nobody can... Look, you know, this is some of the
(47:14):
things that came out of my work at Oxford. Nobody
can anymore examine these truths and these ideas we're seeing
and say, wait a sec, we're just going to have
to go down with the Titanic. There must be a
way to stop this situation. And I do not like
the outcomes. I do not like any part of this.
What a terrible time in history actually to be born,
(47:35):
and yet what a glorious time in terms of how
exciting it is. But I am now very fearful for
my daughters. So anyways, David, just to answer your question,
that gives you a little bit more perspective. And what
I'm talking about now is a lot
of the conversation that happened at the party that has
been on people's minds, and nothing positive came out of
that party except we all had a lot of good drinks.
Speaker 4 (47:56):
Just one quick follow up. We transhumanists used
to say the successor to humans will be transhumans or
even posthumans. In other words, we will be evolving sort
of in parallel with the technology. And so yeah, it's
foolish to think that we humans will continue in this
current broadly similar state, except a little bit smarter, perhaps.
(48:18):
So was there discussion of these transhumanist possibilities? After all,
some people are saying... I talked to Nate Soares recently. He's
the executive director, or rather the president, of MIRI, the
Machine Intelligence Research Institute. He says he wants the progress
of AI to stop long enough so that we humans
can get a heck of a lot smarter first, and
then we can start AI getting smarter again from a
(48:39):
transhumanist point of view rather than from our narrow human
limited perspective.
Speaker 2 (48:44):
Listen, that would be brilliant. The thing that I would
suggest is, you know, I was always optimistic about AI
because I thought we were going to go hand in hand.
Superintelligent AI was going to launch,
but we would have our brain implants in and
we would be one with it, merged with it. That
was my brilliant transhumanist future. That way, it's not
like AI is going off separately. But what's happened is
(49:04):
the AI field has developed so incredibly quickly, and the
Neuralink or brain implant or, you know, uploading our
consciousness, has developed, not slowly, but just not
nearly as quickly. So these two fields that were going
up like this are now like this, and we need
to slow down AGI development and superintelligent AI development until
we can catch up and we could be one and
(49:24):
the same as that superintelligence going forward, because what's
going to happen is the Singularity is going to leave
us completely behind and may destroy us in the meantime.
So you know, that's the sad state of things. The problem, though,
of course, as we mentioned earlier, there's geopolitical concerns. How
can we get every AI engineer on the planet to
actually stop developing it without having some kind of authoritarian
(49:45):
control over them? And the answer is no one has
come up with a solution at this point except
having a global dictatorship that absolutely puts its
foot down on everyone's throat who's trying to develop AI.
But even then you'd still have black market people who
might try to develop an AGI or super AI. So
right now it seems impossible to even know what to
do from a strategic perspective. Right now, it seems
(50:08):
like we're stuck on the Titanic with no way to
avoid that, uh, that iceberg. But you know, I
do agree if we could stop the development of AI
for a few years and let the other fields catch up so
we can upload our brains and things like this and
actually embrace this transhuman future that all of us had
dreamed about, that would be the best way forward, But
(50:29):
unfortunately that's just not how the cards have
played out so far.
Speaker 1 (50:35):
Interesting. Thank you for your responses. Now, Josh Universe writes,
there is no stopping the development of AI. John H. writes,
AI development will not be stopped, just like the development
of weapons tech never stopped. So it seems that there
ought to be some different solution to the concerns that
(50:56):
are raised. Of course, Ray Kurzweil's view, indeed a
view that David has articulated, is that there would be some
sort of transhuman merger between humans and AI, and you
also articulated that view earlier on in your activism, and
it is interesting to think about for the viewers of
(51:20):
this salon what led you to essentially embrace this view
that AI is racing so fast ahead that this merger
might be derailed unless that pace of progress is slowed. Now,
in the interests of time, I want to pose a
brief question from our legislative director, Jason Geringer, and then
(51:45):
let's go to Art Ramon and after that to Bill Andrews.
I do think this is a brief question, especially given
your appearance at the No Kings rally yesterday. Jason wonders,
Zoltan, are you going to have the courage to
stand up to Donald Trump if you're elected governor of
California or in your future political career, if you challenge
(52:07):
his successor for the presidency.
Speaker 2 (52:11):
Yeah. Absolutely. Look, you know I ran against Trump before
and I will absolutely, especially when it comes to something
like the No Kings rally. This is a no brainer.
I mean, this isn't even really about Trump. This is
just about democracy. If you support democracy, then you don't
want raids coming out of nowhere pulling people off the street.
You have to have some semblance of law and order
(52:33):
and you know, humanity in order to stop this stuff.
So that's really where I come down on these things.
And don't get me wrong. I'm not like some ultra
progressive leftist, not at all. I'm like right in the middle.
I mean, Republican, Democrat, it doesn't make a difference to me.
It just happens to be that I believe in a
few core issues. And the one thing I don't believe in,
having been a war zone correspondent, is when you get raids
(52:55):
taking people, separating families out of their homes and who
knows where they end up. I mean, this is something
that cannot be tolerated. You know, obviously it gets
complicated if they're criminals and this and that. But we
can't have even really one mistake of an innocent person
being taken off the streets by people in masks and
(53:15):
law enforcement. That's when you cross the line to dictatorship.
So I absolutely am against those things, and that's why
I was at that rally. I think Trump... You know,
it sounds like Trump may have actually stopped some of
those things after seeing some of the rallies just in
the last twenty four hours. But let's just hope we
can get back to a country that doesn't do that,
because I completely oppose that.
Speaker 1 (53:35):
Yes, thank you for your response, And Rudy Hoffman likes
your emphasis on due process. He thinks that is good,
generally centrist verbiage. And a lot of people, not just
from the far left, have been quite concerned about the
Trump administration's recent actions and essentially unilateral impositions of its
(53:59):
will contrary to the checks and balances that are inherent
to the American system of government. So thank you for
that response, and now let us go on to Art
Ramon for his question.
Speaker 5 (54:10):
Uh, yeah, on the subject of disclosure, you know,
you have these whistleblowers saying that there's, you know,
non human intelligence out there.
Speaker 6 (54:22):
They're here, they've interfered with our weapon systems because they
don't want us to nuke each other.
Speaker 7 (54:27):
And uh yeah, I mean they're here at the time
when AI or AGI is about to be born.
And supposedly, even, some people say it's already there.
The government secretly controls an AGI that can practically
see into the future. So, uh, I mean, what are
your thoughts on, you know, the money that's been funneled
into these black projects, you know, possible, you know, development
(54:52):
of advanced craft, a breakaway civilization? Any thoughts on that?
Speaker 2 (54:55):
So, I mean, there's
been a ton of money put into all sorts of things,
and I think we have to, you know... Well.
First off, let me just say that I am part
of what's happened in the last few years. Let me
real quickly give you an anecdote about Oxford,
(55:16):
where I just finished my graduate degree. So you know,
Nick Bostrom was one of my professors, as were a
number of others, and we had our AI module and
literally eleven days after spending like you know, twenty classes
on AI, ChatGPT was born, and none of our
professors there spoke about this generative AI being even nearly
(55:38):
imminent or nearly about to happen. And here it was, eleven
days after I finished my last class there, that last
AI class, that ChatGPT came out. And
I just, I caution this as well when we talk
about Kurzweil, Peter Voss, other people, even Ben Goertzel, my friends.
I mean, they're all my friends, to be honest. I
cautioned people to understand that a lot of people that
(56:00):
have said stuff, including myself in the past, and a
lot of people that put money into these things, the
field is changing so quickly. At Oxford, the professors who are
the world's experts did not know that ChatGPT was
about to be born and change the entire reason we were
giving these classes in the first place. You know, they
(56:21):
didn't realize how fast and how far forward AI was. I think now,
as we're having this conversation, the same thing applies. I don't
think Kurzweil has his finger anymore on the, you know,
on the pulse of where AI is.
He is not there in those laboratories, you know, fifteen
floors underneath the Google building, working on this AI
(56:42):
with the kill switch, being able to control it. Only
a handful of engineers and a handful of CEOs are
in control of everything at this moment, and don't be
surprised if very soon here you know, we have the
military take over some of these operations. I'm sure they're already there.
I mean, this is already like so far advanced, I
think, from what a lot of people know. So when we
(57:02):
talk about the money being put into it, it's very difficult
to know whether it's good, whether it's useful, whether it
should continue. It's capitalism, but capitalism, I feel, largely speaking,
and I don't want to be an anti capitalist, but
capitalism has been great to get us to this point.
But the world moving forward, in an age of superintelligence,
is not going to need capitalism. It's actually
going to need something a little bit more cohesive to
(57:27):
humanity surviving and thriving, maybe something more like the Star
Trek era. And I know that sounds like I'm switching
to being a socialist or something. I'm not. It just happens
to be that I'm shifting with a set of circumstances
that are happening to planet Earth, to my wife, my daughters,
to you guys, and I think maybe now we're looking
at needing new structures that can get us to the
(57:48):
next era safely without having the superintelligence that may or
may not like this. And just so you guys know, I
just have a few minutes left. Sorry, I have people
coming to my house. But I would love to answer
a few more questions. I know Bill is still there.
Speaker 1 (58:00):
Yes, let's get a question from Bill, and then we
have one more question from Jason Mallory, who has thought
a bit about these issues, and then we'll let you go, Zoltan.
So Bill, go ahead and ask your question. On to you, Bill. Yeah,
go ahead.
Speaker 3 (58:19):
Right, just really short: in a few minutes, I'm
going to be speaking about the Rally for Longevity Rights.
I'm just wondering if you would possibly be available
to be a speaker for that. It's at the Lincoln
Memorial in September of twenty twenty six.
Speaker 2 (58:35):
Yes, I would love to. I'm trying to speak at
everything I can now, Bill. In fact, I was just
invited to speak at the Longevity Forum in Marseilles
in southern France, the twenty twenty six Longevity Forum. So I'm
excited to do that, so please feel free to email me.
We're trying to do as much as possible, and longevity
is still a huge part of my platform. It just
happens that in the gubernatorial race, I'm trying not to
(58:58):
scare people away. You know, I have been a
little bit outspoken, and I have scared people and talked
some wild things. I am actually making a real run
to try to maybe end up in the top five
where I can really make a dent in the conversation.
But longevity is still my heart and soul, especially as
I get older. And believe me, I've noticed it: I
run shorter distances, I'm just not as active, sleeping is more
(59:20):
challenging, things like that. So I'm just as involved as
ever in trying to push longevity forward, and I'm
actually very hopeful that maybe, if AI ends
up being benign, it's something that would just radically transform
the field for us moving forward. The bigger question though,
is, can we keep it benign? But you know, if
we could keep it benign, if we could guarantee it,
(59:42):
I mean, wow, we would probably be there in ten
years, and we could use some of these new technologies
that are going to come out. I think, just, you know,
so anything like that, please invite me.
Speaker 3 (59:50):
Please feel free to just email.
Speaker 2 (59:51):
Me and I'll be happy to talk it over.
Speaker 1 (59:55):
Excellent. Thank you for that response, Zoltan, and we will
hear more from Bill soon about the Rally for Longevity Rights. Meanwhile,
our last question for you comes from our friend Jason Mallory,
who was also a congressional candidate in California. So Jason
wrote a science fiction book called Proxy When Robots Work Better,
(01:00:16):
and this book was released I think about thirteen years ago.
So he had this concept of a civil right enabling
citizens the ability to vote on the use of AI
or automation as well as to utilize their share of
it and receive a UBI funded from the robot income
tax, from the proceeds of robots replacing humans with automation.
(01:00:39):
Have you come across this policy or others to adapt
our free-world economies to AI? So Jason has this
approach for funding a UBI through essentially a tax on
robots and automation, and we had a virtual Enlightenment salon
with him previously on this in twenty twenty two. What
are your thoughts on this idea.
Speaker 2 (01:00:59):
Yeah, no, I mean something like that one hundred percent
could work. And you know, it's quite similar to Bill
Gates's idea, just tax the robots that are taking jobs.
So I would say, you know, if you can prove
a robot specifically has taken your job,
then we need to ask the company that did it:
what are you going to give to the human being
whose job is gone forever? Now, again, this is not saying that
(01:01:20):
it would be like we're going to force it, but
this is you know, if all of a sudden, a
robotics company ends up worth trillions of dollars and just
ends up firing all its employees, clearly that's not going
to work because what's going to happen is everyone's going
to raise their pitchforks and then go after these CEOs
and go after these companies. So we need a system
that works, and government probably has to step in and
(01:01:41):
try to create this kind of system. And so it
would be better to do it upfront. It would be
better to let these companies know upfront that something
like this is going to happen. So what Jason is suggesting
is just one of many different methods, but it's a method
that I would definitely suggest looking into, having some quick
studies done and approaching the businesses and saying, what do
(01:02:02):
you think about this. We're all in this together. There's
no sense of pretending that you're going to have multi
trillion dollar companies that completely eliminate all the human employees
and nobody's giving anything back. That's not going to
lead to a society that is structurally sound. At some point,
there's more people with pitchforks that have lost their jobs
than a CEO can deal with. So at some point,
(01:02:24):
you know, I mean again, this is what my gubernatorial campaign's
really about, is trying to find the avenue down this
path peacefully before something bad happens, before people are out
of work and they lose access to medicine and they
have to give up their house and whatever it is.
So I think if we make this transition nice, you know, safe, functional,
then all of a sudden we can cross the bridge
(01:02:45):
together and say, yeah, you know, capitalism has survived, but
so has the idea that people are taken care of. Again,
this is what the automated abundance economy is about. Our
new policy, which is just we can actually get to
a much better world than we have right now, with
a lot less poverty. The problem though, is that we're
going to have to get big corporations to agree to
(01:03:06):
it upfront, and that's the big challenging part. But in general,
I like what Jason's saying.
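To make the arithmetic behind this kind of proposal concrete, here is a minimal sketch, in Python, of how a robot income tax might fund a universal payment. Every number, rate, and function name below is an illustrative assumption for discussion, not a figure from Jason's or Zoltan's actual proposals.

```python
# Toy model of a robot income tax funding a universal basic income.
# Every number here is a made-up assumption for illustration only.

def robot_tax_revenue(displaced_jobs: int, avg_salary: float, tax_rate: float) -> float:
    """Revenue if companies are taxed a fraction of the payroll
    they saved by automating jobs away."""
    return displaced_jobs * avg_salary * tax_rate

def ubi_per_person(revenue: float, recipients: int) -> float:
    """Naive even split of the revenue across all recipients."""
    return revenue / recipients

if __name__ == "__main__":
    # Hypothetical inputs: 2 million displaced jobs at $60k each,
    # taxed at 30% of the saved payroll, spread over 30 million adults.
    revenue = robot_tax_revenue(displaced_jobs=2_000_000,
                                avg_salary=60_000.0,
                                tax_rate=0.30)
    payment = ubi_per_person(revenue, recipients=30_000_000)
    print(f"Annual revenue: ${revenue:,.0f}")           # $36,000,000,000
    print(f"Annual UBI per recipient: ${payment:,.0f}")  # $1,200
```

Even this toy version shows where the hard questions live: the payment size depends entirely on how "displaced jobs" are counted and what rate companies would accept, which is exactly the "details" problem raised later in the discussion.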
Speaker 1 (01:03:12):
Yes, thank you very much for that response, Zoltan. And
before you go, we'll give you the opportunity to say
any final words to our viewers, anything else you want
them to know about your campaign or what is coming up.
Speaker 2 (01:03:26):
Sure, well, first let me just say, you know my website,
Zoltan Istvan twenty twenty six, we could use donations.
You know, we have a team of people in place,
and when you're trying to play with the big dogs
in running, you know, for governor, it's about money. I
need money to pay people to do AI generated videos, design,
(01:03:48):
work to get the paraphernalia out there, you know, all
the different bumper stickers. I mean, it's crazy how much
it is just to try to run a real campaign.
In fact, I was just reading about some of the
other gubernatorial candidates and how much money they bring in.
So if any of you feel inclined, please go to
Zoltan Istvan twenty twenty six and make a
(01:04:09):
small donation. I also want to say that making small
donations is critical because that's one of the measures that
they have for getting into the debates. The more donors
that you actually have, even if it's just ten dollars,
the better you end up doing. So please consider that.
And second, we did release today a brand new interview
on the Automated Abundance Economy with Team Futurism, a podcast
(01:04:31):
some of you might know, so feel free to listen
to that just to find out more about the ideas
what it means to have, for example, a construction robot
that will build a house, but it already has the
regulations directly built into it, so that way the regulatory
process is nowhere near as challenging, because it's not like
you have to get inspectors out there, and things like that.
There's a whole bunch about the Automated Abundance Economy that
can really challenge the way we perceive capitalism and how
(01:04:57):
to move forward in a new world without actually I guess,
sacrificing capitalism, but still getting people taken care of, having
their basic needs shelter, food, healthcare like that taken care of.
So I would love to encourage you all to just
kind of take a big look at that and see
if it's something that appeals to you, because
we think it might be a way not only to
move the Democrats forward, but also to move the Republicans
(01:05:17):
forward and move the country forward as
we enter into kind of the AI age, where robots
and automation are going to be a huge part of
the picture no matter what happens.
Speaker 1 (01:05:28):
Yes, thank you so much, Zoltan, and we will definitely
check out the Team Futurism interview that you mentioned. Here
is the link, so after the salon, everybody please watch that.
I think it will be fascinating to hear more of
your thoughts on the automated abundance economy. And thank you
so much for joining us today for this hour and
(01:05:50):
for fielding the many challenging questions that we have. Hopefully
this will not be the end of our conversation. A
lot of our members do want us to hold the
endorsement vote, so the vote is going to happen at
some time in the near future, probably in the next month,
and we appreciate you being so candid with your perspectives.
Speaker 2 (01:06:13):
And thank you. Thank you for having me, and I'm
sorry I have to cut it short, but thank you again.
And honestly, it's really great to see all your faces
to have these questions, because I've always loved the Transhumanist
Party and I love the people there, and we are
trying to move forward to a day where we don't
have to die and we can get to a better
future than we have now.
Speaker 1 (01:06:32):
Yes, thank you so much, Zoltan, and we certainly share that aim.
I hope that you live long and prosper and your
dog too.
Speaker 2 (01:06:40):
Yeah, my dog's just going crazy right now. Anyways, thank
you, everyone. I appreciate that he's been quiet all this time.
All right, bye, you guys. Thank you all.
Speaker 1 (01:06:50):
All right, farewell to Zoltan. And again, a lot to think
about from the points that he has raised, and we
hope that we are going to have a lot more
discussion in terms of what points he has shared and
what we think of them. Now, before we delve more
(01:07:12):
into Zoltan's remarks and feedback from our panelists and our viewers,
Bill has some comments about the forthcoming Rally for Longevity
Rights that is planned to be held in Washington, DC
in twenty twenty six. So Bill, please go ahead.
Speaker 3 (01:07:33):
Can I share my screen? I mean, with the PowerPoint
slides. Can you hear me? You're muted. You're muted.
Speaker 1 (01:07:44):
Yes, you should be able to share your screen if
you click on present at the bottom panel, then.
Speaker 3 (01:07:51):
The present button was there, but it disappeared; it only
says it's uploading recordings and that you're in the show.
Speaker 1 (01:07:57):
Ah, I think that may be something on your end.
But see if you can email the slides to me,
and in the meantime, please go ahead and explain what
you wanted to discuss with regard to the Rally for
(01:08:18):
Longevity Rights and we'll pull up the slides as soon
as we can.
Speaker 2 (01:08:22):
Okay.
Speaker 3 (01:08:22):
I'm actually just looking for speakers or people that want
to help with it. That's all I wanted to do,
and I wanted I've been giving presentations just most recently
last week in Las Vegas, presentations on the subject. But
I don't plan on giving that whole thing. So I'll
email you the PowerPoint, but I'm only going
(01:08:45):
to be showing two slides. But the present button
just reappeared.
Speaker 2 (01:08:51):
Okay.
Speaker 3 (01:08:51):
I don't know what happened.
Speaker 1 (01:08:52):
Okay, so just go ahead and try to share your
screen again.
Speaker 3 (01:08:56):
Okay.
Speaker 1 (01:09:04):
So Josh wonders, what topics do you need speakers for?
This is for the rally itself, so for people to
come to Washington, d C during the rally and stand
out there and discuss the importance of advancing longevity research
and helping to catalyze breakthroughs that could help reverse human aging.
Speaker 3 (01:09:31):
I was trying to get this to show up. Did anybody
see it? Did it work?
Speaker 8 (01:09:36):
No?
Speaker 1 (01:09:37):
Actually, it would show up and then I would have
the ability to elevate it to the main screen. But
I don't see the instance in order to elevate it.
Speaker 3 (01:09:48):
Josh wonders when it is... Yes, should I upload the file?
Speaker 1 (01:09:55):
You can just share it on your screen, So share
the window that it's in.
Speaker 3 (01:09:59):
I tried that, and nobody could
see it. So I'm going to try to present my slides
and PDFs again.
Speaker 1 (01:10:07):
If you look on present and share that window that
you have. But in the meantime, Josh wonders, when is
the Rally for Longevity Rights? I understand it will be
held in the spring of twenty twenty six; does it have
any more specificity?
Speaker 3 (01:10:26):
September twenty twenty six.
Speaker 1 (01:10:28):
September. Now, just...
Speaker 3 (01:10:32):
Okay, I'm having a hard time thinking about listening to
your questions and trying to get this presentation up at the same time.
Speaker 1 (01:10:37):
All right, so maybe while you work at this,
I am not seeing it, But if you email me
the slides, maybe that would be more efficient.
Speaker 3 (01:10:50):
I think the reason I can't do it is because
I can't concentrate on doing it while being asked questions.
So let me just go ahead and email it to you.
Speaker 1 (01:10:57):
So, in the meantime, while you do that,
David has some remarks about the Oxford AI professors not
understanding the changes in AI, which Zoltan mentioned, where essentially,
just less than two weeks after he completed his graduate program,
ChatGPT came out and made quite a paradigm shift
(01:11:20):
in the AI landscape. So David, what are your insights
about that?
Speaker 4 (01:11:26):
Well, I'm happy to wait for Bill to have his
slides ready, because...
Speaker 1 (01:11:30):
That's going to take some time. That is going to
take some time, so why don't you go ahead.
Speaker 4 (01:11:35):
I'll give my general feedback about Zoltan.
I was impressed. I thought a lot of what he
said was inspiring, a lot of what he said has
the ability to get people behind it. The only thing
he said which made me raise my eyebrows and I
see that Luis Arroyo in the chat also raised his eyebrows.
(01:11:57):
That is when he thought we might be able to
rely on the kindness of CEOs to spread things to
large numbers of people. It's true some CEOs are kind,
but far too often they are driven by other forces.
So that was the one part where I thought he
is a little bit unrealistic. We need more rules
(01:12:18):
to ensure that the successful businesses don't keep large chunks
of the money they've made entirely to themselves, that they
are obliged by society as a whole to share parts
of it with everybody else, because after all, the profits
are dependent on much wider things than just what the
(01:12:41):
CEOs and the companies themselves do. But broadly, I thought
there was a great set of ideas. When Zoltan talked
about Oxford professors being out of touch, this resonates with
my view entirely. I was recently sent slides by another
Oxford professor who still insists that we don't need to
worry about AI accelerating, because he says modern AI companies
(01:13:06):
have no idea about all the good ideas that Oxford
professors and other leaders in AI have had over
the decades. On the contrary, in my view, a lot
of the people who are at the forefront of these
platform companies, they're avidly searching for ideas they can use
to combine their large language models with some of
the insights of symbolic reasoning or the various earlier methods
(01:13:31):
of machine learning. There is a veritable zoo of innovation
going on, and the fact that these Oxford professors are
ignorant of it bodes badly. But it's not just
Oxford professors who were taken by surprise by what
ChatGPT and other AIs could do. Actually, many people
(01:13:52):
inside OpenAI were shocked by how good ChatGPT was,
including the board of OpenAI. They were shocked that
their management didn't brief them in advance about what ChatGPT
would do, and the management's answer to the board was, well,
we didn't think it would be that big a thing, frankly. So,
the reality is that AI is developing emergent features, some
(01:14:15):
good emergent features, some bad emergent features, some ways in
which it exceeds our predictions as to what it can do,
and some ways in which it makes shockingly terrible judgments,
more shockingly bad than people had expected. In both ways,
AI has got emergent features. So Zoltan's entirely right to
think that it is possible that we could see whole
(01:14:37):
new levels of caliber of AI within a short period
of time. It's not certain. But as Nate Soares said
to me, and he's the president of MIRI,
he used to feel confident he
could tell people, you've probably got about two years to
sort things out; at least you don't need to worry
about superintelligence arriving within two years. There were too many things in
(01:15:01):
which current AIs fall behind general intelligence. He says
you can no longer feel confident in saying that. It
is possible, indeed, that by the end of this year,
there will be sufficient breakthroughs that will leave all of
us in the dust. So Zoltan's right that this is
a matter of urgency. Not because we're sure that there's
(01:15:22):
going to be these drastic changes soon, but that is
certainly possible. So when I hear Bill Andrews talking about
an event on the twentieth of September twenty twenty six, I
now wonder, well, is there still going to be a
democratic system in place at that far-off date, or will
the world be incredibly different by then? Once upon a
time I would have thought, well, of course it's only going
to be slightly different between now and twenty twenty six.
Why, that's just a year and four months away. No,
a year and three months away, something like that. But
now it's possible there will be these larger changes. So
we're in this incredible situation where we've got to plan
for two things. In part, we've got to plan for
a bit of business as usual, hence planning for these
(01:16:06):
conferences a year and three or four months ahead. But
at the same time we've got to be on alert
as never before for early indications of even more drastic changes.
And we absolutely cannot trust what so-called, self-declared
AI experts say is going to happen and not happen, because
because nobody truly understands the capabilities of these systems. That
(01:16:27):
is the true crisis here: we're developing systems that are
operating beyond anybody's understanding.
Speaker 1 (01:16:35):
Yes, thank you for those remarks, David, and hopefully we
will delve a bit more deeply into them as well.
We do now have some of Bill's slides. He did
email me his presentation. So let's start with slide thirty
eight here, the Rally for Longevity Rights.
Speaker 3 (01:16:56):
Okay, well, I'm just right now announcing I've been giving
like thirty minute plus presentations on this, but I'm just
going to be showing two slides. It's the Rally for
Longevity Rights, which has now been approved. Okay. So can
you advance a slide for me?
Speaker 8 (01:17:14):
Okay?
Speaker 3 (01:17:14):
So it has been approved now, the March on
Washington for September twenty twenty six. The Coalition for Radical Life
Extension has gotten the okay to go ahead. And the
three subjects that are going to be part of this
thing are Best Choice Medicine, and that's what I've been
speaking on mostly. That's just a
Thing where we are trying to revamp the FDA policies
to allow anybody that's suffering from a life-threatening disease,
who has been told they have six months to live,
or who has a disease that makes life not worth living,
to be given the okay to try any treatment that they choose,
(01:17:56):
even if it hasn't been tested in animals. It's just
going to allow the doctors to proceed with the treatment
without having to worry about any liability. And it's pretty
much, it's called Best Choice Medicine because, at this point,
when you're in that kind of condition, the best choice
is the patient's choice. It's also going to be discussing
(01:18:18):
classifying aging as a treatable condition. That was classifying aging
as a disease; it's been changed to treatable condition. And
then also, to expedite the approval of longevity therapies. So
those are the things. If anybody's interested in helping with
that or speaking at the event, let me know, and
I will work on trying to move that forward. But
(01:18:41):
right now it's still in its infancy, and there's not
a lot I can discuss beyond what I just did.
That's all I had.
Speaker 1 (01:18:51):
Yes, thank you to Bill. And also here is a
slide from Bill on Best Choice Medicine. Essentially, if the
patient is suffering from a life threatening disease, then best
choice medicine allows for the maximum possible scope of options
(01:19:13):
to treat that disease. And there's a website to best
Choice medicine dot org for you to study that concept
in greater depth. Bill and Liz Parrish have a great
white paper on Best Choice Medicine, so please go check
it out at Best Choice Medicine dot org. And now
(01:19:34):
let us take a look at some more feedback from
our viewers regarding Zoltan's remarks. So first, on the subject
of the robot tax idea, David Wood writes that there are
many problems with the details of any specific robot tax,
and he thinks we already have a suitable mechanism, namely
(01:19:56):
a corporation tax set on a sliding scale. Now I wonder, David,
along those lines, do you think it would be practicable
to replace all taxes with a corporate income tax or
a corporate revenue tax that then could suffice to fund
a UBI or other functions of government in order to
(01:20:17):
both streamline the taxation system as well as address some
of the concerns about perhaps automation contributing to job displacement.
Speaker 4 (01:20:28):
I'm not sure about that. I think there probably is
still a role for sales taxes on various things, so
I would still support various taxes being set
on items that are perceived as comparative luxuries. People
want to buy them, they should be prepared to pay
a bit more. And I'm also in favor of
Pigouvian taxes, that's taxes on things that have lots
(01:20:49):
of negative externalities, named after Arthur Pigou, the Cambridge
economist from, I think, the nineteen twenties or nineteen thirties,
who said, you know, we don't want people to smoke,
we should charge more taxation on cigarettes. So I think
taxation needs to be a little bit more complicated than
just corporation tax. But I think if companies are successful,
(01:21:13):
unusually successful, it doesn't just mean that they're entirely responsible
for that success themselves. A lot of their success has
been because they've taken advantage of a generally educated populace,
people who have been educated, people who have learned about science. So
that's my advocacy for a corporation tax. But I think
we need to do more than a UBI. What I
(01:21:35):
liked from what was being said by Zoltan is
a sort of vision of a luxury existence. The
drawback of a basic income is it's only a basic income,
and it will leave many people feeling left behind. They
will feel they've got enough for poor quality accommodation, poor
quality education, poor quality health. And that is a recipe
(01:21:55):
for the pitchforks. So rather than talk about a UBI,
various people, including myself, talk about a UGI, a universal
generous income, or a ULI, a universal luxury income. And that may
seem absurd, but Zoltan's entirely right: it can
be enabled by using automation to drive down the costs.
So we could have much cheaper accommodation if we have
(01:22:18):
the kinds of robot building systems that Zoltan was talking about.
I mean, we can have much cheaper healthcare through the kinds
of things that Bill Andrews is talking about now, and
we should probably discuss that more. Rather than waiting until
people have got acute diseases, chronic diseases, which have
developed to a terrible state which requires a great deal
(01:22:39):
of healthcare, it's far better if we can address things
earlier in the development of damage, and that will be
much less costly. So with a right approach, we can
deliver good healthcare for a fraction of the cost that
often occurs today. It's the same with education. These virtual
enlightenment salons, of course, are completely free, and I see
(01:23:01):
that Harvard is making more of its courses completely free,
and with AI helping, it'll be the case that more
and more people who want to find out the answer
to summing, they'll be able to interrogate their favorite free
or almost free AI agent. So that's the focus we
should be honest, not how are we going to do
the redistribution, It's how can we build the abundance in
(01:23:22):
the first place, not just build the abundance, but invent
the abundance. How do you get more people studying their
mechanisms of aging and fixing it? And so, Bill, I'd
be interested to hear what is the pushback that you're receiving,
if anything, from what you're advocating. Are people generally saying, yeah,
this is a good idea, let's get behind it, or
do they have any legitimate reservations, in your mind, that
(01:23:44):
they're raising, like: well, this might enable too many cowboys to
advocate false solutions and scam people into paying their
life savings for treatments that aren't actually that good?
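As a purely illustrative sketch of the sliding-scale corporation tax David mentions in the chat, the following Python function applies progressively higher marginal rates to higher bands of profit. The brackets and rates are invented for this example and do not correspond to any real tax code.

```python
# Illustrative sliding-scale (progressive) corporation tax.
# Brackets and rates are invented assumptions, not any real tax code.

BRACKETS = [
    (10_000_000, 0.10),      # first $10M of profit taxed at 10%
    (100_000_000, 0.20),     # next $90M taxed at 20%
    (float("inf"), 0.35),    # everything above $100M taxed at 35%
]

def corporation_tax(profit: float) -> float:
    """Marginal-rate tax: each band of profit is taxed at its own rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if profit > lower:
            tax += (min(profit, upper) - lower) * rate
        lower = upper
    return tax

# A hugely profitable firm pays a higher average rate than a small one:
print(corporation_tax(5_000_000))      # 500,000.0      -> 10% average
print(corporation_tax(1_000_000_000))  # 334,000,000.0  -> ~33% average
```

The design choice is the same one behind progressive personal income taxes: an unusually successful company pays a higher average rate, while a small one is barely affected.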
Speaker 3 (01:23:59):
Well, we haven't had any pushback that we haven't already
thought of. Okay, so we've already addressed those things. And
if I showed my entire presentation, and maybe, Gennady,
that's a session for a future Enlightenment Salon, I could
show you there how we have
addressed practically every issue that anybody could have.
I have not. I've only had standing ovations when I've given this presentation,
so there's been no, even afterwards, no negative feedback.
Speaker 1 (01:24:31):
Yes, absolutely, let's do it. We have some openings for
our Virtual Enlightenment Salon schedule in August, and I would
be happy to have you present on one of the
Sundays of that month, because Best Choice Medicine would be
an excellent subject to devote a salon to all on
(01:24:54):
its own. In terms of David's comments about needing more
than just a universal basic income, Luis Arroyo agrees with that,
and to use another expression of David's, if we do
achieve sustainable super abundance, then it would be possible for
(01:25:16):
every individual human to receive much more than a basic
standard of living. So please everyone check out David's book,
Sustainable Superabundance. And for those of you who want to
listen to our salon with Jason Mallory from October sixteenth,
twenty twenty two, on his ideas of essentially a tax on
(01:25:43):
automation that would fund a UBI, please go to the
link shown on the screen, and we did explore that
idea in depth including some challenges, some skeptical remarks and
analyzes that were offered. So it's clearly an ongoing and
multifaceted discussion. Now there is a comment from one of
(01:26:08):
our viewers that I want to highlight because I think
it's quite insightful. Joseph Wolfson writes, could we all be
overly projecting humanity's flaws, including genocidal tendencies, onto AI? Could
such fear mongering become a self fulfilling prophecy? And I
have had many thoughts along the same lines, essentially looking
(01:26:32):
at the outputs of some of the large language models
and how some reporters have written articles saying, oh, well,
we got this large language model to say that it
wants to destroy humanity after a large amount of prompting.
But really, where does it get that information from? Where
does it get that inclination from. It's from all the
(01:26:52):
source material that it processes that was written by humans.
And so if the predominant proportion of source material talks
about the dangers and threats of AI and how AI
could become rogue and seek to destroy humanity, then the
large language models will pick up on that and amplify it,
(01:27:12):
because that's what the responses to the questions regarding the
future of AI have looked like in the human literature. So,
in terms of the predominant massive content about AI being fearful,
could that in itself lead to the AI systems amplifying
(01:27:34):
that fear What are people's thoughts about that? I know
Bill gave a thumbs up to that comment, anybody else,
David or art Ramone.
Speaker 4 (01:27:46):
So I think doomerism is a dangerous psychological attitude. If
people feel there is no hope, that drives them into
bad places. And we must not end up in a
bad place. We must take the issues of AI very
seriously because things could go wrong, just as we should
take the issues of nuclear war very seriously. So we
(01:28:06):
should worry about geopolitics going bad. What's going on in
Iran and Israel just now deserves some serious attention. We
would be naive to say, hey, everything's going to turn
out fine. Don't worry, We've turned out fine in the past.
That would be irresponsible. So we need to evaluate these
things seriously, but not from a defeatist standpoint saying oh,
there's nothing we can do, it's all disaster. So I
(01:28:29):
advocate for activism rather than doomerism. I think we need
to be wise to figure out what are the failure
modes and how can we deal with the failure modes?
And the failure modes are partly that these AIs will
be very different from human intelligences. They're sometimes described as
AIs in the sense of alien intelligences. They sometimes get
(01:28:53):
visual recognition completely wrong in baffling ways. We humans look
at it and say, well, this is obviously a pig,
and the AI says, this is obviously an aeroplane. And
you wonder how on earth it can get that wrong.
And it's because it's been very slightly misled by some
additional information that's been put there. And if we don't
understand how these systems work, we can be misled by
(01:29:14):
their alien conclusions. And it may well be they're reaching
alien conclusions about morality too. And the risk is not only
that they will actually calculate things wrong; it's that
they won't care what we humans actually want, in the
same way that we humans don't care so much
what evolution wants. We were created by evolution with various instincts,
(01:29:35):
which by and large meant we were good at passing
on our genes to the next generation. But in a
changed environment in which we live, many of these instincts
no longer mean that we're great at passing on our
genes to the next generation. We are motivated by different things.
You know, we are motivated, to use another example from
Nate Soares, we're motivated to eat lots of Oreo cookies,
(01:29:55):
many of us, even though intellectually we know that's not
good for us.
Speaker 3 (01:29:59):
Why?
Speaker 4 (01:29:59):
It's because the instincts that made sense for us in
Neolithic times are no longer the ones that make sense
for us in this new environment. Well, in the same
way with AI, we may train Ais by pointing them
at the entire canon of English literature, in fact, the
great literature from every language in the world, and it
may reach its own conclusions. But in a slightly different environment,
(01:30:20):
it may end up doing things which we humans will say, Hey,
this is foolish, and the AIs will say, well, I
know you humans think this is
foolish, but I'm pursuing my own objectives. I've got these objectives
here to make money from my corporation or to ensure
that my military doesn't get defeated. And so we have to be wise.
So it's not just that we're projecting naively the psychological
weaknesses of humans into these systems. There are all kinds
of other failure modes, which need a lot of attention
from us. Not a panicked attention, not a fear mongering attention,
but very careful, honest, diligent analysis.
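A tiny sketch can make the point about training data concrete: a statistical language model can only echo the patterns in its corpus, so a corpus dominated by fearful AI narratives yields fearful completions. The toy bigram model below, with an invented four-sentence corpus, stands in for the vastly larger models being discussed.

```python
# Minimal bigram "language model": it can only echo statistical
# patterns present in its training text, illustrating why a corpus
# full of fearful AI narratives yields fearful AI completions.
# The tiny corpus below is an invented stand-in for real training data.
import random
from collections import defaultdict

corpus = ("ai will destroy humanity . ai will help humanity . "
          "ai will destroy jobs . ai will destroy humanity .").split()

# Count which words follow which, to sample continuations by frequency.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def complete(prompt: str, length: int = 4, seed: int = 0) -> str:
    random.seed(seed)
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # next word, weighted by frequency
    return " ".join(words)

print(complete("ai will"))
# Most samples continue with "destroy", because that is what the
# training text said most often -- not because the model "wants" it.
```

Real LLMs are incomparably more sophisticated, but the underlying lesson survives: a model's apparent "inclinations" are statistics over what humans wrote.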
Speaker 1 (01:31:00):
Yes, thank you for your response, David, and I do
like the emphasis on the activist approach that we need
to consider potential risks and consider ways to mitigate them
without sliding into doomerism or despair, and that latter perspective
(01:31:22):
has been the source of my critique of Eliezer Yudkowsky
and his followers, because I have heard comments essentially suggesting
that a lot of people who follow Yudkowsky have essentially
resigned themselves to an end of the world scenario due
to AI in two years, and they live their lives
(01:31:42):
like there's no tomorrow. And that is, of course, a
self-defeating approach, and if there are enough of them,
it could be a self fulfilling prophecy. Fortunately, I hope
and I expect that there will not be enough people
who embrace that perspective, but definitely being vigilant about risks
(01:32:03):
is important. Now, interestingly enough, we have a comment from
Jason Geringer, who says, well, LLMs absolutely fill in the
blanks like that based on the source material that they
receive, and they simply predict the next character. But an AGI
will not be an LLM. It will be a collection
(01:32:23):
of AI types. And certainly Peter Voss also believes that
LLMs are not a path to AGI. But you, David,
have said it's a false binary to contrast LLMs with
alternative approaches to AI. The most important new platforms combine
LLM breakthroughs with innovations from these other approaches, for instance,
(01:32:43):
a neurosymbolic approach. So perhaps advances in artificial intelligence and
the path toward AGI could come from a suite
of models, not a unitary model. So thank you for that.
Any other thoughts about the self-fulfilling prophecy comment,
(01:33:04):
Bill or Art Ramon?
Speaker 2 (01:33:08):
Yeah, I mostly fear it being weaponized.
Speaker 9 (01:33:13):
I don't think AI is just going to decide to
hate humans because of the science fiction we've written about it.
Speaker 2 (01:33:20):
I think it's going to be weaponized somehow.
Speaker 9 (01:33:22):
You're going to have a dead man's switch, uh, and
it's going to continue on. Someone builds in a dead
man's switch so that it no longer requires the human in the
loop, and it just goes on doing some malicious activity
after it was weaponized. So that would be my fear.
Speaker 6 (01:33:44):
But yeah, and what I was sort of alluding to,
speaking to Zoltan: I've heard recent reports that the White
House or the US government already has an AGI that can practically, uh,
you know, predict the future. So that's interesting to hear.
And that makes me wonder why Trump is flip-flopping
(01:34:08):
on mass deportations if he does have such an AGI
that looks into the future. So maybe that flip-flopping is a strategy.
Speaker 1 (01:34:19):
I don't know. So yeah, yes, thank you for those remarks.
And Bill, since you gave a thumbs up to the
comment by Joseph Wolfson. Any thoughts about the self fulfilling
prophecy idea.
Speaker 3 (01:34:35):
Well, just my general comments about AI and the
future AGI or the present AGI. I mean, I think
that you really get to know AI when you pretend
that you don't know your own specialty and ask AI
for its opinion of your specialty and you find out
that it's actually quite wrong all the time, and
(01:34:59):
it relies... So AI is relying mostly on hearsay,
what's being said in the press. It has no way
of distinguishing right from wrong. And, like, one
of the things I speak about a lot is critical
meta-analysis of peer-reviewed studies, which means not just
do a meta analysis, but actually go through each paper
(01:35:21):
and look at the experimental design and the logic of
their conclusions, and things like that, to form opinions. Now,
AGI is supposed to do that. That's kind of
what AGI is meant to do, to bring in a
human cognitive kind of approach. But I think
it's going to be a while before AGI is as
(01:35:42):
smart as humans are. And my biggest fear is that
we are going to allow AGI to be,
let's say, dumb, and as a result, it's going to
make decisions that could possibly eliminate us, especially
if we just let it. And so we have to,
(01:36:05):
I think we have to put some type of control
on AGI until we actually have it completely at the
level of human cognitive thinking. And the reason
I gave the thumbs up on that is because I
see it as: we are just going to let
(01:36:28):
it happen, okay, and we're afraid of it. We're not
going to do anything to block it. Uh, and that's
what we could end up having happen. That's all I got.
Speaker 1 (01:36:39):
Yes, thank you, Bill, And what you said resonates with
me in the sense that I have sometimes asked various
large language models questions about me and my work, and
I've seen them hallucinate specific facts. Like, one of them said
I have a law degree from the University of Nevada Reno,
(01:37:00):
and I have neither a law degree, nor did I
attend the University of Nevada Reno. I mean, I go
there for classical concerts, but that's not the same as
getting a degree from there. It also hallucinated names of
my musical compositions. I asked questions about my music, and
it got some general statements correct, but I had to
(01:37:21):
correct it in terms of specific references, and after I
corrected it, it tended to give more accurate answers. But
one needs to be a subject matter expert on the
topics in question in order to be able to know
that the LLM is hallucinating. Otherwise, the answer on the
(01:37:42):
surface may seem plausible enough that it could mislead people,
and that is something for everyday users to be quite
wary of when using generative AI, because one needs to
double check the accuracy of factual statements that are being made. David,
you have a comment along these lines about Future House
(01:38:04):
and the work that Alex Zhavoronkov is doing with his
company, Insilico Medicine.
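As a small illustration of the double-checking habit just described, here is a sketch that treats any specific factual claim from a model as unverified until it matches a trusted record. The ask_llm function, the registry, and its contents are hypothetical placeholders, not a real API or real data.

```python
# Sketch: never accept a model's specific factual claims without
# checking them against a trusted source. `ask_llm` is a hypothetical
# placeholder standing in for whatever model API you actually use.

TRUSTED_DEGREES = {  # imagine this loaded from an authoritative registry
    "Gennady Stolyarov II": [],   # no law degree on record (illustrative)
}

def ask_llm(question: str) -> str:
    # Placeholder: a real call would go to a language model here.
    return "He holds a law degree from the University of Nevada, Reno."

def verify_degree_claim(person: str, answer: str) -> str:
    claimed_law_degree = "law degree" in answer.lower()
    on_record = any("law" in d.lower() for d in TRUSTED_DEGREES.get(person, []))
    if claimed_law_degree and not on_record:
        return "UNVERIFIED: claim not supported by the trusted record."
    return "Consistent with the trusted record."

answer = ask_llm("What degrees does Gennady Stolyarov II hold?")
print(verify_degree_claim("Gennady Stolyarov II", answer))
# -> UNVERIFIED: claim not supported by the trusted record.
```

The point is not this particular check, but the habit: a plausible-sounding answer earns no trust until it is matched against a source you already trust.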
Speaker 4 (01:38:11):
Yeah, so there are a couple of different examples here.
So first, I entirely agree with Bill that the standard
large language models are often frustrating when you
interrogate them on something that you know something about. At first,
it can seem very convincing. It can often give a
number of good answers and you think, wow, this is great,
and then it suddenly gets something spectacularly wrong. So these standard
(01:38:34):
models are not what we should be relying on. But
there are people already working on getting more reliable surveys
of scientific literature, and one of the most important initiatives
I think is Future House. Future House has received funding from,
if I remember correctly, Eric Schmidt, the former CEO and then
executive chairman of Google. It's also received money from Open
(01:38:56):
Philanthropy, that is to say, Dustin Moskovitz, co-founder of Facebook,
amongst others. So they are coming up with some really
impressive tools to help accelerate the scientific process, which are
less likely to fall into the kind of mistakes that
you'll get from a standard large language model. Then there's
(01:39:17):
what Alex Zhavoronkov is doing. I think he is probably the
most astute user of AI in the field of biochemistry,
because he insists that in order to not be misled
by what these AIs do, you need to have people
in your organization who are experts in AI, but also
who are experts in biochemistry, and if you lack either,
(01:39:40):
you can be misled. So he has developed what he
calls a zoo of generative models, and it will take too
long to explain everything that he's doing. But there are
strong indications about some of the drugs that his AIs
have suggested, particular solutions for idiopathic pulmonary fibrosis, which is
age-related scarring of the lungs, something to which more
(01:40:03):
and more people, as they age, fall victim, too.
He has suggested a drug, or rather his AIs have
suggested a drug which might be a good solution, and
the early signs from, I think, the phase two tests
are very encouraging. So these things are transforming the field,
and they can transform it further provided we're wise about
it rather than naive about it. Then I want to
(01:40:29):
circle back to the very interesting point that Art was
making about, well, what happens when we use these things
in military circumstances. Any sensible military strategist will say, well,
of course, we're not going to let these AIs make
autonomous decisions; that would be bad, wouldn't it? Then you
come to a real-life war situation. We have drones,
and currently, on the whole, they ask for
(01:40:52):
guidance from the human operators. Should I disable this person?
Should I attack now? But in the heat of the battle,
especially when drones come up against drones and you have
swarms of drones against swarms of drones, we humans are
too slow to make a decision. And so there is
an inevitable pressure for military leaders to take autonomous decisions,
(01:41:13):
to allow the AIs to make autonomous decisions there. Drones aren't going away.
We're seeing drones being used, of course, in Ukraine and
in Russia with some spectacular results, spectacularly bad and spectacularly good.
We're seeing them, surely, in Iran and Israel as well.
Israel, too, is already using AIs in many ways
to try and help pinpoint which
(01:41:39):
citizens in Gaza are those who have double lives as
Hamas operatives. It's by no means foolproof, and it
makes mistakes, but they are using these AIs currently, mainly
with human approval. But of course there's going to be
pressure for them to let go of that human oversight
as well, for fear that they will be outmaneuvered.
(01:42:03):
So we are on a slippery slope here, and we
need to strengthen everybody's resolve not to go further down
that slippery slope. It's not going to be easy, but
what can help us out here is a shared recognition
by all sides that you know this is just too dangerous.
Organizations that develop these tools first won't be the winners.
They will be the precipitators of potentially something much worse.
Speaker 1 (01:42:28):
Yes, thank you for those remarks, David. Okay, yes, go ahead, Bill.
Speaker 3 (01:42:34):
I'm hesitant. I'm debating whether I should say something because
I wouldn't have time to follow up on it. But
I find that the treatments that has come up for
treating any idiopathic pulmonary fibrocess to be very naive and
it's a disease that I study a lot, and I
(01:42:54):
think this is an example of the problem that I'm
talking about, and we can discuss that in more detail offline,
because I can go through why I think
that the approaches are naive.
Speaker 4 (01:43:11):
Well, I don't want to spend too long on it, but isn't
the proof in the pudding, that they
go through the tests and there are better results?
Speaker 3 (01:43:19):
That's like saying that every cancer drug that's ever gone through clinical
studies is perfect. Okay, we have to
do better, and we can do better, okay. These things
will not cure idiopathic pulmonary fibrosis, just like
(01:43:42):
our cancer drugs do not cure cancer. But there
are a lot of ideas that don't
have the funding to move forward, and those
are being largely ignored. And it's just, you know,
I was National Inventor of the Year for my cancer research,
so I know a lot on that subject, and I'm
(01:44:03):
also doing a lot of work on idiopathic pulmonary fibrosis.
So it's a lot more complicated than
the models are making it look, or,
I would say, less complicated. Okay, we have to approach
it from a whole different angle. But that's why I debated saying anything.
(01:44:27):
I could spend a lot of time talking,
but to spend more time discussing the subject,
I would have to go back to what I've read
before and remember what it was. I just remember thinking, Oh,
he's missed the boat.
Speaker 4 (01:44:41):
Yes, I think that maybe you would largely agree. Alex
would say he's not going to fix IPF comprehensively
with this, but he has something that will be a
useful palliative, a useful reversal in some cases, before we get
a proper solution, which will be attacking aging at
the more fundamental level.
Speaker 3 (01:45:03):
Let me just put it differently: he's ignoring some of
the obvious things that should be considered, and so on.
Speaker 1 (01:45:11):
Yes, and I think that would be a great subject
for an in depth virtual Enlightenment salon as well, either
your research into cancer, Bill, or your research into idiopathic
pulmonary fibrosis and where you think various approaches that have
been pursued have shortcomings, as well as other approaches that
(01:45:32):
you think could be successful but haven't received the requisite funding.
And I know unfortunately in the scientific community there are
many promising research pathways that could have an impact, but
they don't have an impact because there's not enough financial
support or because there are regulatory hurdles, and we as
(01:45:55):
the Transhumanist Party, need to be quite concerned about that
and thinking about how to enable those pathways to succeed. Now,
in terms of autonomous weapons, which were discussed, the US
Transhumanist Party supports the Campaign to Stop Killer Robots. That
is part of our platform, and we are concerned about
(01:46:18):
the potential for humans misusing AI. Indeed, I would say
our concern about that exceeds the concern in regard to
AI becoming rogue and acting against humanity on its own.
But if humans intentionally deploy destructive weapons systems without a
human in the loop, then that could lead to all
(01:46:40):
sorts of unintended consequences. In particular, one redline that must
never be crossed is giving AI any control over nuclear
weapons systems, because I could readily imagine scenarios where AI
would use dispassionate game-theoretic, quote, logic, which I think
is actually quite flawed for the record, to launch a
(01:47:01):
nuclear strike when humans, even through very raw emotional considerations,
may restrain themselves, like I don't think Donald Trump will
launch a nuclear first strike in part because Donald Trump
wants to stay around and enjoy his wealth and power.
And those may be not the most salutary motives, but
(01:47:24):
I think they act as a restraining influence on Donald Trump.
And I could say the same about Putin as well.
But would an AI system have those motives? To some extent,
the more venal motives of authoritarian leaning leaders may be
a protection from nuclear war that an AI system would
not have. But in terms of autonomous or quasi autonomous
(01:47:47):
systems being deployed, David, you are aware of this: that
Israel had a system called Lavender, as you mentioned, that
was used to identify suspected Hamas operatives, and Lavender technically
had a human in the loop. So in terms of
the actual design of it, Israeli officers were signing off
on targeting decisions, but in practice there was a lot
(01:48:11):
of political pressure on them to just routinely approve the decisions.
And that's a concern as well that even if you
design theoretical safeguards there could be institutional pressures essentially leading
people to not really have a whole lot of agency
(01:48:32):
in overriding the AI. So humans need to exercise their
will here, and they need to exercise their judgment and discretion.
I very much think there always needs to be a
human in the loop, otherwise we face some potentially terrifying consequences,
especially given how the geopolitical situation the military engagements in
(01:48:55):
the Middle East have been intensifying. I think adding autonomous
weapons or even human in the loop systems where the
humans are not overruling the AIs, would be extremely dangerous.
We have Anthony Nielsen joining us. He's our director of
Technology Outreach, and welcome Anthony. If you've been listening to
(01:49:18):
the salon with Zoltan, let us know any feedback that you have.
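As a minimal sketch of the human-in-the-loop principle discussed just above, the following Python fragment gates any irreversible action behind an explicit human approval, regardless of the system's own confidence. The action description and the confidence field are invented for illustration.

```python
# Sketch of a human-in-the-loop gate: no irreversible action is taken
# unless a human explicitly approves it, regardless of model confidence.
# The action name and the confidence score are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    confidence: float  # the system's own (possibly miscalibrated) score

def human_approves(action: ProposedAction) -> bool:
    """Block until a human operator types an explicit yes/no."""
    answer = input(f"Approve '{action.description}' "
                   f"(model confidence {action.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    if not human_approves(action):
        print("Rejected by human operator; no action taken.")
        return
    print(f"Executing: {action.description}")

execute(ProposedAction("disable target system", confidence=0.97))
```

Of course, as the Lavender example suggests, such a gate is necessary but not sufficient: if institutional pressure turns approval into a rubber stamp, the human in the loop exists only on paper.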
Speaker 8 (01:49:24):
I apologize I jumped in late here. All I heard
was talk about AI and human overrides, and all
I will say is, from my own experience, having been
the person that directly has to train these: I mean, it's like any tool,
and I think that's the big thing
that I try to express here as Director of Technology
Outreach: like any tool, it requires literacy and competency to
understand the tool that you're working with. And AI
(01:50:02):
is not, I mean, it's not an
autopilot thing. You can't just trust it to do its
own thing.
Speaker 2 (01:50:09):
It's not.
Speaker 8 (01:50:10):
I mean, anybody that's been watching Elon's promises of having
self-driving automobiles can vouch for this. You know,
there's this idea that, you know,
people say we can trust the technology to do
(01:50:31):
its job.
Speaker 1 (01:50:33):
No, we can't.
Speaker 8 (01:50:34):
We can trust that it's a tool that still needs
to be honed and sharpened like any other tool. And
as somebody that has been employed to train these AI
learning models, you know, on different platforms like Mercor and,
oh, there's another one, there's at least three or four
(01:50:58):
different online AI and machine learning platforms that are employing
people like me basically as subject matter experts
to train them. And essentially, as a result, they are
getting very discounted labor because, as you know, AI does
(01:51:24):
not need to take a break or be paid a wage.
But at the end of the day, yes, we need
to understand what we're working with.
Speaker 8 (01:51:37):
I think, with regard to Zoltan's stance on, you know, being pro versus
anti AI, it's like, well, I think AI should be
met with a healthy amount of skepticism, but it is
a tool just like any other, and the rules
set forth by Isaac Asimov still apply, as far as
(01:52:09):
I've talked to any transhumanist, and from my own experience
as well. Like, you know, as long as its
primary purpose is to serve us and advance the human condition,
whatever that human condition might be to our advantage, then
(01:52:31):
that to me is transhumanism.
Speaker 1 (01:52:35):
Yes, thank you very much, Anthony, and I very much
echo the view that AI is a tool right now,
and it's a tool that has its limitations, and it
requires a complementary set of human inputs and skills and
knowledge in order to be able to deploy that tool effectively.
And of course, the tool can be misused if it's
(01:52:57):
not deployed in the correct way. A knife can be
misused, a vehicle can be misused. Fire, which is
one of the oldest technologies, can be misused if humans
choose to deploy it improperly.
Speaker 8 (01:53:08):
Or, like, the great, you know, the perfect
example is saying, oh, the cars can drive themselves. Well,
we see what happens when we let the cars drive
themselves right now. And, you know, we're being told, like,
well, it's safe. Well, it obviously isn't.
Speaker 1 (01:53:31):
I've had some good experiences with Waymos in Los Angeles.
Speaker 8 (01:53:37):
Waymos are a different technology there, and that's where
understanding the technology that you're using, the competency,
the literacy, the technological competency and literacy, is important
in my opinion as a transhumanist and as somebody who
(01:53:57):
shares that idea of, like, yes, there are these
great tools, and you can't just trust them;
it's not magic. As somebody who gets thrown into a
lab and treated like a magician, yeah, it's hard to
bridge that gap in understanding and knowledge. But if we
(01:54:20):
pair that healthy amount of skepticism with the correct amount
of training and, you know, informational availability, then we can
bridge that gap.
Speaker 1 (01:54:36):
Yes, absolutely. And we do, unfortunately, have to conclude our
Virtual Enlightenment Salon for today, but comments like yours, Anthony,
stress the need to continue to thoughtfully explore the issues
surrounding AI and other emerging technologies. I'll leave you with
these insights.
Arthur C. Clarke's third law expresses that any sufficiently advanced technology is
indistinguishable from magic. In my experience, in regard to particular
technologies or approaches, I find that the less people understand
about a given technology or approach, the more they tend
to think of it as magical. So with that, indeed,
(01:55:21):
we've had a fascinating conversation today with
Zoltan and amongst ourselves. I hope that we can all
live to see a much brighter future, and for that
we need to live long and prosper.