Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Greetings and welcome to the United States Transhumanist Party Virtual
Enlightenment Salon. My name is Gennady Stolyarov II and I
am the chairman of the US Transhumanist Party. Here we
hold conversations with some of the world's leading thinkers in longevity, science, technology,
philosophy and politics. Like the philosophers at the Age of Enlightenment,
(00:23):
we aim to connect every field of human endeavor and
arrive at new insights to achieve longer lives, greater rationality,
and the progress of our civilization.
Speaker 2 (00:34):
Okay, new presentation.
Speaker 3 (00:37):
Gennady Stolyarov, a regular participant in our conference.
Speaker 2 (00:42):
There is a poster.
Speaker 3 (00:44):
On the wall over there with some more details about Gennadi.
Of course, there will be another poster, an electronic one, at
the end of the conference with full details, because these are
becoming permanent.
Speaker 2 (01:00):
Gennadi has a lot of activities now.
Speaker 3 (01:03):
He will speak about the progress of artificial intelligence. But
also I would like to announce in advance that he
is the author of Life Game, and I will speak about
Life Game immediately after Gennadi, in order to make sure
that he will be able to hear my presentation. Because
(01:26):
in Nevada at the moment it is midnight, so as not to
force Gennadi to stay awake all night straight on, we
will continue according to the time zones.
Speaker 2 (01:38):
Gennadi, you have the floor.
Speaker 1 (01:40):
All right, thank you very much, Angel. And it is
an honor as always to present at the Vanguard Scientific
Instruments in Management conference, and hopefully you will recognize the
timeliness of the topic. Artificial intelligence is advancing rapidly, and
(02:04):
I think it is important, especially as transhumanists, as futurists,
to bring some balance, nuance, rationality, and proportionality to a
conversation that I think, especially these days, is overly dominated
(02:24):
by both hype and fear. So today I hope to
point out both the immense progress that AI has been
making as well as to assuage, let's say, some of
the more extreme concerns about artificial intelligence completely displacing humans
(02:47):
or making humankind, human existence, human worth, and purpose obsolete,
because I certainly do not think that AI will have
that effect. I think AI can have some disruptive effects, clearly,
and we need to be aware of what those are.
(03:07):
But right now, it seems all too often the conversation
is dominated by a combination of hype and doom. Say,
how many times have all of us heard the following
statements or something along these lines, That AI will take
all jobs and make human workers obsolete, that artificial general
(03:30):
intelligence, AI that can learn and perform potentially any task
without being specifically trained to do so is imminent. Some
have posited there will be an AI singularity by the
year twenty twenty seven, so now less than a year
and a half from now, based off of large language models,
(03:52):
which is essentially the current state of the art technology
for building AI systems that most members of the public
are familiar with, like the chatbots ChatGPT, Grok, Gemini, Claude,
or the AI art generation platforms like Midjourney or DALL-E.
(04:15):
Grok has a built-in AI art generation platform as well.
Some people say, because AI is able to synthesize such
vast amounts of knowledge in so many disciplines and provide answers,
summaries, overviews, that at least seem plausible to a layperson,
(04:36):
you will never need to study or learn anything again
because of AI. And on the other hand, some people
will say, well, AI will make you lazy, it will
make you over reliant on it, your brain will atrophy
because of AI, because you will outsource too many of
your cognitive tasks to AI, and so that will spell
the doom of humans actually doing anything meaningful, and it
(05:02):
will lead our species to decay and deteriorate. Some people
will say AI will inevitably escape all human safeguards and
destroy humanity. It's Eliezer Yudkowsky's set of talking points that
if anyone builds an artificial general intelligence, everyone will die.
No matter what we do, we are doomed, and the
(05:24):
best we can hope for. Some will say, and sometimes
these people even portray themselves as being optimists, is as
AI replaces us or even ends our existences, we can
hope that AI will be a worthy successor to humanity.
Now I don't buy any of these talking points. I
think such statements stoke sensationalism and they do not advance
(05:47):
anyone's well being. Well, maybe they advance the momentary fame
and visibility of those who are making these kinds of claims.
And I think, unfortunately, a vulnerability of human psychology
is that people tend to fall for the more extreme statements,
often because we evolved in an environment where if we
(06:11):
see something behind a bush in the distance, well we
can assume that it's something benign, or we can assume
that it's a predator. And if we assume that it's
benign and we go about our way, well maybe we'll survive,
or maybe it actually is a predator and it will
eat us or do something else to end our existences.
So our ancestors who survived tended to assume that it
(06:35):
was a predator. And I think with any new development,
including AI, people are already primed through their let's say,
evolutionary baggage to make that kind of assumption. But I
think it's important to consider what are all of these
statements missing, why are they incorrect, And at the same time,
(06:57):
why is it so tempting for so many people to
make these kinds of statements, both in terms of AI
hype that AI in the course of a few years
will just displace everybody and make our species obsolete, as
well as AI doomerism, which states that AI will become
so powerful that it could eradicate the human species. And
(07:20):
some people will assign an extremely high, what they call, P(doom),
like ninety nine percent, that humankind will become extinct. So
I will say my P(doom), for what it's worth,
is somewhere less than point zero zero zero zero one.
But I think both the AI hype camp and the
(07:41):
AI doom camp misapprehend what the distinguishing advantages of humans
actually are. And I don't fault them for doing that
because for millennia, human philosophy, discourse, and literature have actually taught
us that these are among our most distinguishing advantages. But
(08:04):
for example, are humans uniquely the best at creativity? Well,
we've seen AI systems that can create some impressive art
and music, and sometimes those creations seem no less original
than the ones that humans have come up with, And
this observation even precedes the current generation or recent generations
(08:29):
of models that can create art and music. There was
a composer named David Cope, who unfortunately died earlier this year,
who created AI generated music in the style of various
composers going back to the nineteen eighties and nineteen nineties.
(08:50):
He had AI systems called EMI, Experiments in Musical Intelligence, and
Emily Howell that were able to both compose in the
style of historical composers and create some interesting sounding original works.
So AIs can be creative, I think. Are humans uniquely
superior at logical reasoning? Well, we see many AIs, including
(09:14):
commonly used large language models, that can already reason better
and more comprehensively than most humans, certainly they are less
likely to fall prey to very common fallacies or common
propaganda techniques. What about manual dexterity and spatial orientation? Those
are actually more challenging skills for artificial systems to learn,
(09:40):
But some impressive humanoid robots are emerging and various companies
are at work on developing these robots. If you've seen
the videos from Boston Dynamics, for example, or if you've
seen any of the prototype household robots that are being demonstrated now.
(10:00):
They have a way to go before they can navigate
physical environments with the same fluidity and dexterity that humans can,
but clearly there doesn't seem to be a hard limit
to these capabilities. What about computation and information processing? Well,
clearly even computers that have existed for decades are better
(10:23):
than humans at these, and AIs can also far surpass
humans at these tasks. Now, what about compassion, empathy, and
emotional intelligence. These have often been presented as unique human attributes,
but some people have engaged in dialogue with large language models,
(10:47):
and it seems, at least in many cases, that those
models are able to offer responses that those people find
to be more compassionate or to, in some ways,
be more understanding or more reflective of their situations than
responses of humans. I think sometimes even people may be misled,
(11:12):
for instance, into using AI for purposes of therapy, where
the large language models are actually not sufficiently advanced or
tailored for that purpose. But people do find some of
their responses to be emotionally compelling. What about ethics, morality,
(11:35):
and wisdom? Well, I would contend that perhaps any one
of today's commonly used AIs would do a better
job than today's crop of dominant politicians, business leaders, and
cultural figures. And I know that's a low bar to exceed. However,
I will make that observation. I would rather have an
(11:56):
AI as president than the presidents or other national
leaders that currently dominate the world. So where does that
leave us? So AIs have previously defied predictions of what
AIs could never do. And this is now a famous
(12:18):
cartoon from Ray Kurzweil's two thousand and five book The
Singularity Is Near. In it, the character at the desk is
meant to represent the human race. And on the wall
are all of the statements of what AI cannot do
or what only humans can do. And on the floor
all of such previous statements that have been discarded, but
(12:40):
the human race continues to write new statements. But what
is interesting about this kind of critique, and these kinds
of statements whose limitations AIs seem to essentially defy, is that
these are statements about specific capabilities, like playing chess
(13:03):
or improvising music, or understanding speech or translating. So these
are specific tasks that it's alleged that AI can never do,
and AI systems are developed to do them. But these are
not so much broader attributes of the kind that I
(13:23):
will be discussing. And this is also timely because it
was one of the key areas of focus in our
twenty twenty four election campaign, where our presidential candidate Tom
Ross had a major emphasis on the need to focus
on attributes that cannot be coded or automated away. So
(13:46):
it is possible for there to be major disruptions in
the job market because of automation and artificial intelligence. In
the past, when new technologies have been developed, it is
true that certain professions, certain lines of work, certain tasks
were made obsolete; humans no longer needed to perform them,
or what those tasks looked like became very different. So
(14:10):
one of his three campaign initiatives, the Earthly Initiative, was
intended to prepare Americans for the technological and economic singularity.
I think it's more of a long term endeavor to
engage in such preparation, but it's quite worthwhile to think
about what truly makes humans stand apart. What human skills, faculties,
(14:34):
attributes truly are irreplaceable and will not be made obsolete
by AI anytime soon. And some of this transition could
indeed involve support for people who are temporarily displaced from
their jobs, ideas like a universal basic income or retraining initiatives,
(14:58):
educational tools harnessing AI for the purpose of helping humans
build new skills or discover the skills that they already
have that perhaps were not explicitly articulated or deployed in
such a way as to enable those humans to thrive.
(15:18):
Because if we get this transition right, AI and automation
can bring about immense improvements to living standards. So one
of Tom Ross's many advertising campaigns focused on this idea
of Paradism. So you can see here a post-laundry automated utopia,
or a post-vacuuming automated utopia, or a post-mowing
(15:44):
automated utopia where the robots can do all of these
tasks so that you can enjoy post scarcity automated weekends
and a robot can come by and charge your electric
vehicle while you are engaged in whatever activities bring you
meaning and fulfillment and enjoyment in your day. So that's
(16:06):
the kind of future that we want. We don't want
a future of hype and fear. We don't want a
future where humans are arrayed against AI. We want
a future where humans coexist with AI. And that means
we need to know the distinguishing advantages of AIs, but
also the distinguishing advantages of us humans. So the arrival
(16:30):
of artificial intelligence in many spheres of life has caused
a fundamental rethinking of this question: what does it mean to
be human? Because clearly, what traditional philosophies, conceptual systems, or
colloquial understandings have taught us as being the distinguishing advantages
of humans are perhaps less exclusive than they have historically seemed. However,
(16:55):
other attributes that were overlooked or taken for granted actually
turn out to be essential advantages that I don't think
AIs will possess anytime soon. So what are the actual
advantages of humans? I thought of ten, and interestingly enough,
I asked both Grok and ChatGPT, what do you
(17:18):
think of this list? Are these actual advantages of humans
over AI systems? And both Grok and ChatGPT agreed
with me on each of these points, for what it's worth.
But I think these points will stand on their own
irrespective of what AI systems have to say about them.
(17:39):
But I think one fundamental advantage of us humans that
most people just let's say, let exist in the background
of their lives, but don't really verbalize as explicitly, is
simply the fact that we have persistent identity or subjective
continuity of vantage point, what I call the I-ness. So
(18:01):
just the mere fact that I perceive the world as me,
and this is a continuous perception, a kind of process
continuity of my existence which builds upon itself, and AIS
don't have that. So every time you run an instance
of a large language model, it's a different instance, and
(18:21):
it may draw on a common knowledge base or training
data set. But every time a different user interacts with
an LLM, it's not the same entity with the same continuity.
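This per-instance separation can be sketched schematically. The following is a toy illustration only, not any vendor's actual architecture; the class name, context limit, and reply format are all invented for the example.

```python
# Toy sketch of how chat deployments are commonly structured (invented
# names and numbers; not any vendor's actual API): the trained weights
# are shared and frozen, while each session keeps its own private,
# bounded conversation history.

SHARED_WEIGHTS = {"training_cutoff": "2024"}  # one frozen artifact, reused everywhere

class ChatSession:
    """One user-facing instance; nothing here persists across sessions."""

    def __init__(self, context_limit=4):
        self.history = []                    # private to this session only
        self.context_limit = context_limit

    def send(self, message):
        self.history.append(message)
        # Only the most recent messages fit into the context window.
        visible = self.history[-self.context_limit:]
        return f"reply based on {len(visible)} visible message(s)"

a, b = ChatSession(), ChatSession()
a.send("hello")
print(b.history)  # [] -- session b knows nothing of session a's conversation
```

Two sessions share the same frozen weights but no state, which is the sense in which a chatbot is not one continuous entity.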
It's a mistake to say, well, ChatGPT is a
singular entity. No, it's just code that gets deployed in
(18:44):
similar ways but with different details to different people. Likewise,
AIs have relatively short context windows. If you engage in
a conversation with one of today's large language models, the
context window, what it remembers, tends to persist over a
few tens of exchanges. It can vary. The context windows
(19:08):
have grown over time, but for humans, the context windows
span decades. We have memories that go back to some
of the earliest years of our lives. Although they are imperfect,
we tend to remember the moments or ideas or experiences
that were at least understood by us to be the
(19:30):
most important. Furthermore, we can gather empirical evidence autonomously through
our physical senses. This is not the case for AI
models that generally receive training data. Maybe they have access
to the Internet, but the vast majority of them are
not embodied. They cannot just go out into the world
(19:51):
and learn through seeing, touching, hearing, smelling. They could receive
descriptions of what that sensory experience is like, but this
is not anywhere close to actually having that experience, So
the way in which they will reflect that is a
kind of faded simulacrum of what we can fathom directly
(20:16):
and in extreme richness of detail just by looking around
and engaging our other senses. Also, humans have tremendous versatility
in learning almost anything, at least roughly to a first approximation,
if you consider how a young child learns and the
curiosity that a young child displays toward various domains of
(20:40):
human endeavor. AI systems can only learn in domains where
they are trained to learn, and a child might not
become extremely good at something for a while, but at
least they can take the first steps into almost any domain.
Humans have resilience and extreme adaptability even in the face
(21:04):
of severe survival pressures, in the face of wars, natural disasters, poverty,
all sorts of suboptimal conditions of life, they are able
to figure out responses that will at least get some
of them through, and some people, even with imperfect and
inconsistent responses, will have generally the right responses to any
(21:28):
given set of pressures, and hence human beings have survived
a variety of cataclysms. AI systems require environments that are
finely optimized and tailored to their existence, and they can
function tremendously well within a set of background conditions that
are actually quite fragile, as one knows if one has studied history to
(21:51):
any significant extent. So in any sort of crisis or cataclysm,
my bet would be that humans will outlast AI systems. Also,
humans have the ability to implement rough heuristic solutions when
precise approaches are not available or too costly. Sometimes AI
systems will overthink and get stuck in infinite loops, or
(22:16):
try to devote undue amounts of energy and resources to
problems that could be solved, perhaps again not as precisely,
but in a manner that's good enough to suit the
needs of the entities that are actually affected by these problems,
and humans tend to have a decent grasp of when
(22:38):
a heuristic solution is good enough. Also, humans have undertaken
intuitive integration of life experiences, and many of them are unspoken.
They're tacit generalizations drawn from numerous situations whose lessons cannot
be easily verbally explained or written down, But that essentially
is what intuition is, and it's formed over the course
(23:00):
of a lifetime. Sometimes one cannot rationally explain simply because
language is deficient, why one may think that in a
given situation certain consequences might arise. Maybe one might have
an uneasy feeling about a particular situation, And that's an
intuitive integration from having encountered many similar situations in the
(23:26):
past and perhaps catching on to certain clues or context
that the AI system, especially if it's not embodied and
doesn't have the human senses or long context window, just
wouldn't have a reason to perceive that way. And also
common sense, well, common sense is not so common among humans,
but it's far more prevalent among humans than AI. So
(23:50):
sometimes humans, again, will pick up on the context and
will recognize that a particular solution is suitable or not suitable,
whereas an AI will simply evaluate the solution on some set of, let's say,
technical merits or criteria that may or may not fit.
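This "good enough" judgment is what decision theorists call satisficing, as opposed to optimizing. A minimal sketch, with a made-up target, search space, and tolerance:

```python
# Satisficing vs. optimizing on a toy problem: find a number whose
# square is close to a target. All numbers here are arbitrary
# illustrations, not drawn from the talk.

def optimize(candidates, target):
    """Exhaustive: examine every candidate to find the minimal error."""
    return min(candidates, key=lambda x: abs(x * x - target))

def satisfice(candidates, target, good_enough=5.0):
    """Stop at the first candidate whose error is acceptable."""
    for x in candidates:
        if abs(x * x - target) <= good_enough:
            return x  # good enough; stop searching
    # Nothing acceptable found: fall back to the exhaustive search.
    return optimize(candidates, target)

candidates = list(range(1, 1000))
print(optimize(candidates, 170))   # 13 (169 is the closest square)
print(satisfice(candidates, 170))  # 13, after examining 13 candidates instead of 999
```

The satisficer reaches the same answer here while inspecting a fraction of the search space; the human advantage described in the talk is knowing when that trade-off is safe.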
Humans also have an advantage, and I think this is
(24:12):
an advantage of satisficing over optimizing. So satisficing is knowing
when a solution is good enough, even if more work
could be expended to make it marginally better. And humans
definitely have inner experience. We know it because we ourselves
as individuals have inner experience, and unless one falls into
(24:35):
what I think is the very fallacious mindset of solipsism,
I think it's reasonable to conclude that entities with very
similar biological makeup to ourselves will also have that inner experience.
Maybe someday AIs will have inner experience as well, but
certainly current large language models do not have the equipment
(24:55):
for inner experience. And inner experience is important if we
want to be around entities that actually care about us, at
least sometimes, rather than just mimicking the appearance of doing so.
So the AI therapists, well, they may give some good
advice sometimes if they're configured properly, but do they actually
care about the people interacting with them? Well, they don't
(25:17):
have the inner experience to be able to care. So
this is interesting too, because some AIs may differ as
to whether or not they have common sense. Again, I
asked Grok and ChatGPT, do you have common sense?
And Grok said: yes, I've got a decent grasp of
common sense, enough to navigate everyday situations and make reasonable
(25:38):
judgments based on practical knowledge. I'm built to understand the
world in a way that aligns with how humans think,
so I can usually pick up on the obvious stuff
and avoid the absurd. But if you throw me a
curveball from some niche corner of human experience, I might
need to lean on my data or ask for a
bit more context to nail it. So Grok does acknowledge
that limitation, and later on Grok said, well, yes, I
(26:01):
have some simulacrum of common sense, but humans could still
be better at common sense.
Speaker 1 (26:06):
ChatGPT, I think, was a bit more accurate in the
initial characterization. I can apply reasoning that often looks like
common sense, drawing on patterns from everyday life, language, and logic.
For example, if you say it's raining and I forgot
my umbrella, I can infer that you might get wet.
(26:26):
But unlike a human, I don't have lived experience or
gut feelings. My common sense comes from recognizing patterns in
how people usually talk and reason about the world. That
means I can sometimes miss things that would be obvious
to someone who's been in these situations personally. And they
(26:47):
all then ask you a standard question: well, do you
want to explore this further, etc. And I think sometimes
that formulaic way of answering questions is also more characteristic
of AIs, less characteristic of humans. So there are
also problems that humans are still uniquely good at solving,
(27:07):
and these are abstract cognitive problems. Usually these problems involve
combinations of skill sets, for example, pattern recognition, spatial perception,
or logical reasoning, which AI systems can tackle individually fairly well,
but they fail to properly combine them or, importantly, recognize which
skill to apply at which time, whereas, again, humans have
(27:28):
the intuition to do this without really that much difficulty.
So François Chollet created a set of problems, and there
are three generations of these problems, called the Abstraction and
Reasoning Corpus, ARC-AGI. So the idea is, if an AI
(27:50):
system can be developed to solve these kinds of problems,
then it could make a plausible claim to being an
artificial general intelligence. There's an interesting article on this called
"AI's Achilles Heel: Puzzles Humans Solve in Seconds Often Defy
Machines" from Scientific American. I would encourage you to
(28:11):
read that, and also you can try all three sets
of the ARC-AGI problems at arcprize.org. So to
test what is an AGI, one should have a definition
of AGI. So this framework is based on the definition
that AGI is a system that can efficiently acquire new
(28:33):
skills outside of its training data. François Chollet wrote that
the intelligence of a system is a measure of its
skill acquisition efficiency over a scope of tasks with respect
to priors, experience, and generalization difficulty. And he wrote this
in twenty nineteen, so before the current crop of large
(28:54):
language models. No current AI system is anywhere close to
embodying those characteristics. For example, we have impressive chess playing
AIs that can beat humans. No chess-playing AI today
can learn to drive a car without being taught. We
have impressive AIs that can drive cars; I've ridden
in Waymo self-driving automobiles, and I would trust them
(29:16):
to a greater extent than I would trust the vast
majority of human drivers. But no self driving car AI
can learn chess without being taught. A large language model
cannot learn to fold laundry (how could it?) even to
the level of a current humanoid robot, at least without
being taught, without being put inside a robot and explicitly
(29:38):
given the routine for folding laundry. But a humanoid robot
that's taught to fold laundry cannot learn philosophy even to
the level of a current LLM, again without having a
database of philosophy texts and ideas uploaded into it. So
none of these are AGIs. I don't see a
(30:00):
path that is very directly apparent from these current systems
to an AGI. I tend to agree with Peter Voss
that a different architecture may be needed to get to
a true AGI. But let's see an example of an
ARCAGI problem. And this is just a problem that I
(30:20):
selected at random, but it's a very interesting problem. So
on the left hand side and in the center you
see examples of inputs and outputs. So what happens when
you have this figure here, the red square with a
dot in the middle, and then you have essentially these
(30:44):
eight black dots around it, and you see the output: well,
it's the same, but it's filled with a blue color.
And you see another example of an input where now
there are more black squares in the middle, and you
see now there are two columns that separate the center
dot from the square here that gets filled with yellow,
(31:08):
and you see two more examples. Now these will, I think,
challenge an AI because now you've got multiple shapes. But
really there's only one new rule to be inferred from this,
and a human can very readily infer that rule. It's
about filling this green space here where there are three
(31:30):
black columns between the frame of the square and the
central red square here. So how do you fill this?
That's the problem, and I think we humans can tell
the answer very quickly if we understand the given condition.
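ARC-style tasks can be represented as small integer grids with a rule to be inferred from a few training pairs. The sketch below is a deliberately trivial invented task in that spirit (real ARC-AGI puzzles, like the one described here, are far harder); the color codes and the rule are assumptions for illustration.

```python
# Toy ARC-style task: grids are lists of lists of color codes
# (here 0 = black, 1 = red, 2 = blue -- an assumed palette).
# A solver must find a rule consistent with all training pairs,
# then apply it to the test input.

def check_and_apply(rule, task):
    """Return the rule's answer on the test input, or None if any training pair falsifies it."""
    for pair in task["train"]:
        if rule(pair["input"]) != pair["output"]:
            return None
    return rule(task["test"]["input"])

# Invented rule for this invented task: recolor every black (0) cell blue (2).
recolor = lambda grid: [[2 if cell == 0 else cell for cell in row] for row in grid]

task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[2, 1], [1, 2]]},
        {"input": [[0, 0]], "output": [[2, 2]]},
    ],
    "test": {"input": [[1, 0, 1]]},
}

print(check_and_apply(recolor, task))  # [[1, 2, 1]]
```

Humans infer such rules from two or three examples; ARC-AGI's difficulty lies in tasks whose rules, unlike this one, combine several perceptual and logical steps.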
(31:51):
So this one will be green, this one will be yellow,
and these two will be light blue. That is
the solution. No AI can solve this today. So why
will humans remain needed in the workforce? We talked about
some general human attributes that I think will not be
(32:13):
replaced by AI anytime soon. How do they apply in practice?
I strongly think that the future is not one of
AI replacing humans, but rather one of humans who use
AI sometimes replacing humans who do not use AI. So
perhaps the nature of the tasks we will perform with
the aid of AI will change or evolve, and those
(32:34):
who learn to use AI effectively will keep up with
that evolution. Hallucinations will continue to be a persistent problem.
Either they're inherent to the design of large language models,
as AI experts such as Peter Voss and Rathi Kumsia,
with each of whom we've had Virtual Enlightenment Salons that
(32:56):
I have hosted at the US Transhumanist Party, have suggested, or at
the least these hallucinations are very difficult to extricate, and
you always need to be vigilant. You always need to
know how to pursue certain inquiries further, or design prompts
that are less vulnerable to hallucinations. And unlike human confabulations,
(33:16):
which often provide emotional or circumstantial clues that something appears
to be off, LLM hallucinations will appear plausible to
the uninformed reader. So unless you're a subject matter expert,
you might miss the fact that the AI has hallucinated,
and therefore deep subject matter expertise will become more essential
(33:37):
than ever to differentiate usable AI generated content from flawed
or hallucinatory content. So the more AI is used in,
for instance, scientific research or writing reports, writing about any
subject that involves facts or complex concepts, the more precise
(33:58):
you need the answer to be, the more you will
need subject matter experts to check the output of the AI,
and also human editorial and curation skills are essential to
enable the AI generated content to be deployed in practice.
I have seen now numerous instances of AI generated content
which seems to be ninety five percent to ninety nine
(34:21):
percent fine, but the other one to five percent, if
it were allowed to remain as is, would be clearly
seen as absurd or inaccurate or even let's say damaging
to the reputation of the person or organization that deploys it.
So you need a vigilant eye. You need a skillful
(34:41):
editor to detect which one to five percent to remove
or replace or ask another AI to provide the actual
good content there. And also if you have a larger
project and AI cannot create your entire project for you,
which specific pieces of AI-generated content belong in a
(35:03):
work product, in what sequence and how do they relate
to other AI creations and human creations, Because I think
a lot of work products, outputs, are not going to
be exclusively human or exclusively AI. They'll have elements of both,
and which elements work best in which places in which projects.
That will be a human skill set to determine and
(35:23):
then human prompting skills will be necessary to even generate
decent AI content in the first place, because we know
even with computations, if you ask an AI system, a
large language model, to multiply two large numbers, the answer
it will tend to give you will be accurate to
about three significant figures, and then it will be inaccurate
(35:46):
from that point forward. However, if you ask it to
give a precise calculation, or if you challenge its first attempt,
it will engage a subroutine, maybe it will run some
code that will actually give you the correct answer. So
it tries to give you a heuristic answer that is wrong,
and then if you prompt it in a different way, it
has the ability to give you the correct answer, and
(36:08):
then there are AI-generated prompts for AI. Some people will say, well,
human prompting is not going to be around for long
because the AIS will be able to make better prompts,
but that misses the point because the AIS don't know
what they don't know, and in particular, they don't know
the needs of the users or the standards by which
the users will evaluate the outputs. So those unarticulated needs,
(36:30):
and I think we at this conference and in the
general transhumanist, futurist space are quite articulate individuals. Most people
will not have that kind of patience to articulate what
their needs are in such a way that the AI
will predict it. So I think human prompting is here
(36:51):
to stay. And then also in any given organization, most
knowledge is tacit, and even in any given person who
has worked in a particular role for a long time,
it is a distillation of years of experience. It's often
not written down in a procedure manual or other legible format.
Most knowledge is distributed across persons, so different people may
(37:11):
hold different pieces of knowledge that is essential to making
an organization work. And AIs also may generate valuable knowledge
and insights, but those will be different from the knowledge
and insights that humans can generate. AIs will have some
pieces that are needed to make an organization work, but
many different humans will continue to hold on to other pieces,
(37:33):
so it will be a kind of heterogeneous landscape of
knowledge and experience. And similarly, while AIs may generate impressive
writing or artistic output, humans will continue to be able
to generate writing or artistic output. It might not be better,
it would just be different by virtue of these human
minds being different from, let's call them, the minds of
(37:55):
any AI systems, and this difference will continue to have
value, just as differences among humans that
lead to different outputs have value in our society today.
And then persistent human identity contributes to the formation of
institutional memory in an organization, and that institutional memory is crucial.
I've seen what can happen when institutional memory is lost,
(38:17):
for instance, through rapid personnel turnover. And I think a
lot of companies in Silicon Valley today are making huge
mistakes by laying off thousands of workers, maybe under the
excuse that AI is rendering those jobs obsolete, because I
don't think that is actually what is happening, but they're
losing a lot of that long context, a lot of
(38:41):
that institutional memory, and institutional memory is passed on gradually
in an organization, often without being cataloged or proceduralized through
everyday interactions. And some people will say, well, okay, if
you have experience, if you have subject matter expertise, you
will remain valuable. But what about entry-level workers? Can't
(39:01):
AI do the simple entry level tasks that are often
delegated to these individuals. But I would say even entry
level jobs will have value because existing decision makers should
value the continuity of vision and implementation. Ultimately, people in
their current positions in an organization, even in top leadership positions,
(39:23):
aren't necessarily going to be there always. Aren't necessarily going
to want to always be there. Maybe the transition is
something they welcome if they want to go on to
do other things, But then they would wish to preserve
what they have built. And the best way to achieve
that continuity of implementation is to train and mentor one's
eventual successors and to secure their loyalty by treating them well.
(39:48):
And I don't think an AI would quite do that task.
At least I don't think an AI could be programmed
to have that continuity of vision over the course of
many decades, as contrasted with a human successor. So now
we come to executive decision making. While AI could plausibly,
(40:10):
as I said, replace world leaders and CEOs of large companies,
and I'll explain why with little ill effect, I think
it would take a long time for AI to be
able to replace a competent small business owner. And this
seems like a paradox. But I think we understand that
running a business effectively requires general intelligence, paying attention to
developments in a large number of disparate domains, calibrating effective
(40:33):
responses under various constraints. So narrow AI can't do all
of that. It might be able to assist in some
specific tasks, but even for effective project management or coordinating
social interactions, those kinds of capabilities are required. In recent years,
there have been various attempts, especially when the first really
(40:54):
impressive large language models came out, to run a business
based entirely on the advice of an AI system, that is, do
whatever the AI system says. And even for AI systems
that drew upon vast knowledge bases, those attempts failed miserably.
Why is that? Because the AI systems could not grasp
what was happening in real time that affected the opportunities
(41:15):
and constraints of that business. So the AI system might say, well,
I recommend that you purchase so much of this input
for your business, and in theory that may be a
reasonable recommendation, but the fact is the market for these
inputs is changing constantly, and maybe the AI was giving
you advice based on data that it had ingested two years ago,
(41:37):
and you can't find those inputs today, or what you
find is really different. And that's just one possibility out
of millions of ways in which the AI advice could
be misaligned with real-time reality. But why can AI
replace world leaders and CEOs of large corporations? Because those
people are similarly removed from the real-time reality
(42:00):
on the ground, because rigid hierarchical structures keep them away
from that reality through often intentional information filters, and people
who don't want to give them bad news, and people
who act as screens for them, so that they really often
exist in a bubble, and so they could be more
(42:22):
readily replaced by AI than active managers who actually get
things done.
Speaker 2 (42:27):
So you could have a.
Speaker 1 (42:28):
World leader that is an AI and that gives informational
speeches articulating a general agenda for the country, or
a large multinational corporation, but you'll need human implementers to
actually realize any of that vision. So to conclude, I
think this is an idea that anybody can apply to
(42:55):
make sure that one endures and remains valuable in an
era of increasing deployment of artificial intelligence. I think the
greatest value a human being can bring is in their
individual life story and the way in which the story
influences everything that human does, because no AI can capture
(43:16):
that fully with the same depth and nuance, and leveraging
the unique attributes and advantages derived from one's life story
is a way to reliably differentiate oneself from AI and
remain resilient to any disruptions involving AI. And I say
this knowing full well that there are attempts right now
to create AI avatars of people, including people who are
(43:40):
alive today, and that will continue to happen. But the
fact is the AI avatar only knows what you give it,
so your writings, sure, or even, say, informal communications or notes.
But can an AI avatar have every moment of your
lived experience and capture every nuance and really anticipate how
(44:02):
you would react to a novel situation? I think an
actual human will always have an advantage in that. And furthermore,
humans are drawn to the stories of other humans, and
this will remain the case even if AIs become technically
better storytellers. Sure, the AIs could entertain us, but if
we're looking for meaningful stories about people that are relevant
(44:25):
to us, as people, I think we will continue to
need the stories of other people for that purpose. So
hopefully I've dissuaded some people who think that doom of
any sort or human obsolescence of any sort from AI
is imminent. But certainly a lot of change and a
lot of transitions are going to come, and the Transhumanist
(44:47):
Party tries to tackle these kinds of transformations and figure
out how they can be harnessed beneficially. So I would
encourage anybody to join us from anywhere in the world.
Membership is completely free at transhumanist dash party dot org
slash membership. Or if you're apolitical, we have an affiliate
organization called the Transhuman Club, which is more about art,
(45:12):
education, discussions of that sort, technology as well. So please
visit our websites, visit our weekly virtual enlightenment salons, and
let's continue the conversation on these very important changes that
are happening. How can we ensure that the human species
(45:34):
remains, I think, in a deserved place of relevance and
continued improvement of our conditions. Thank you very much.
Speaker 2 (45:43):
Okay, thank you, thank you very much for your presentation.
Just one second.
Speaker 3 (45:52):
Next step is to open discussions. There is a number
of people with an AI background at the moment in the conference,
on different branches of AI. I see doctoral students, postdocs, and
actually David already started a discussion in the chat, so please
(46:13):
your questions and comments for all. Of course. Okay, questions
and comments.
Speaker 4 (46:19):
Okay, no, I just wanted to say, it
was just mentioned that, for example, to play chess, it
should be explicitly trained to play chess.
Speaker 5 (46:32):
But actually, if we look at AlphaZero
and AlphaGo, they were trained by playing against themselves.
That means that if the task can be formulated as
a reinforcement learning setting with a more or less clear
value function, and for chess and Go it's pretty
(46:56):
easy to measure whether you won or not, actually you
don't need the human training data. The issue is that
for more complex real life it's extremely hard to come
up with an appropriate reinforcement learning setup, and you will see all
types of value function hacking when it tries to game
(47:19):
the system.
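[As a concrete illustration of the self-play point above (this sketch is an editorial addition, not part of the talk): below is a minimal tabular Monte Carlo self-play learner for the toy game of Nim, where the only training signal is who won, so no human game data is needed. The game choice, hyperparameters, and function names are all illustrative assumptions.]

```python
import random

def train_self_play(n_stones=10, episodes=30000, alpha=0.3, eps=0.2):
    """Monte Carlo self-play on Nim: players alternately take 1-3 stones,
    and whoever takes the last stone wins. The only learning signal is
    the win/loss outcome at the end of each self-played game."""
    Q = {}  # (stones_remaining, action) -> value from the mover's perspective

    def q(s, a):
        return Q.get((s, a), 0.0)

    for _ in range(episodes):
        s, history = n_stones, []
        while s > 0:  # one game; the same Q-table plays both sides
            actions = list(range(1, min(3, s) + 1))
            if random.random() < eps:          # epsilon-greedy exploration
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: q(s, x))
            history.append((s, a))
            s -= a
        # The player who made the last move won (+1). Walk back through
        # the game, flipping the sign, since moves alternate players.
        r = 1.0
        for st, ac in reversed(history):
            Q[(st, ac)] = q(st, ac) + alpha * (r - q(st, ac))
            r = -r
    return Q

def best_move(Q, s):
    """Greedy action from state s under the learned values."""
    return max(range(1, min(3, s) + 1), key=lambda a: Q.get((s, a), 0.0))
```

[After training, the greedy policy typically recovers Nim's known optimal strategy of leaving the opponent a multiple of four stones, despite never seeing a human game. Scaling this idea to messy real-world tasks is exactly where, as the speaker notes, specifying the value function becomes the hard part.]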
Speaker 1 (47:20):
That was just a moment, yes, thank you, and I
think that's a fair observation. I do think that the
AIs that are trained to, let's say, excel in domains
where the criteria of success are very clear, like chess and
Go, are trained in a fundamentally different way from the
(47:41):
AIs that operate in the physical world, because those AIs
need to take in a lot of data from sensors.
So let me give an example. It seems like Tesla's
self-driving car AIs are failing right now because of
the decision that Elon Musk made to not use
(48:01):
lidar technology and rely solely on the cameras of those vehicles.
So it seems that there's insufficient empirical input there. But
at the same time, it seems that those AI systems,
the successful ones, like the Waymo systems for instance,
even though they take in all of this input, they
wouldn't teach themselves an abstract task with well-defined criteria of success.
(48:28):
So this is what I mean when I observe that
they cannot cross domains very readily. Now, if there are
two activities that are somewhat similar, maybe chess with different rules.
Like there's three-dimensional chess; you can sometimes buy
sets of three-dimensional chess. Some have even devised
(48:50):
rules for four-dimensional chess. Maybe a system like that
could learn those games, but that's still very proximate. I
am still skeptical that we're anywhere near a system that
can go from being trained in just one domain or
not being trained at all, and somehow mastering an entirely
(49:14):
different domain that requires different skill sets, almost a different
framework of functioning that we have within our cognitive and
sensory equipment to learn even from a very early age.
So this is a way, again, in which we're just
more versatile than the AIs.
Speaker 5 (49:33):
Well, talking about, for example, 3D worlds: actually
right now, what many expect is that a lot of
synthetic 3D data will be simulated in 3D worlds.
Soon we will see a lot of so-called
world models, models trained in synthetic, pre-generated environments
(49:55):
that will then transfer-learn, based on limited data, to a
real-world environment. That's actually happening in robotics, where they
do pre-training and then post-training steps so that they can perform well.
But overall, what you raised about humans is that, as
we know, humans are way more efficient because they
(50:16):
need less training data than AIs. So that's what can
be improved. There is an architecture, I think it's by
Meta, in the FAIR lab, where they are
training JEPA-based models that try to simulate human
learning by this embodiment in a virtual kind of body,
(50:40):
trying to generalize among multiple modalities. So it's slower
than multimodal LLMs. But I think in some
number of years, actually, it will be coming.
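[The pre-train-in-simulation, fine-tune-on-real-data recipe described here can be sketched in toy form (an editorial illustration, not from the conference; the dynamics coefficients, sample counts, and names are all made-up assumptions): a one-dimensional dynamics model is first fit on plentiful synthetic transitions from an approximate simulator, then adapted with only a handful of "real-world" samples.]

```python
import random

def fit(data, start=(0.0, 0.0), lr=0.01, epochs=200):
    """Fit next_x = a*x + b*u by stochastic gradient descent on squared error."""
    a, b = start
    for _ in range(epochs):
        for x, u, x_next in data:
            err = (a * x + b * u) - x_next
            a -= lr * err * x
            b -= lr * err * u
    return a, b

def rollout(a_true, b_true, n, noise=0.01):
    """Sample n transitions (x, u, next_x) from a noisy linear system."""
    data = []
    for _ in range(n):
        x, u = random.uniform(-1, 1), random.uniform(-1, 1)
        data.append((x, u, a_true * x + b_true * u + random.gauss(0, noise)))
    return data

random.seed(0)
synthetic = rollout(0.90, 1.00, n=2000)   # cheap, plentiful simulator data
real      = rollout(0.85, 1.10, n=20)     # scarce real-world data

pretrained = fit(synthetic)                        # pre-train in simulation
finetuned  = fit(real, start=pretrained, lr=0.05)  # adapt with little real data
```

[The fine-tuned model ends up close to the real dynamics even though only 20 real samples were used, because the simulator pre-training got it most of the way there; that is the essence of the sim-to-real transfer the speaker describes.]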
Speaker 1 (50:54):
Very interesting, and I could see how that could give
the AI system some limited embodiment. I remain skeptical that
they could replicate all five of the main human senses
and the various nuances of them. I think perhaps vision
(51:16):
might be the easiest, to some extent maybe hearing, but
what about taste, smell, touch? Those require some, let's say,
more advanced systems. Unless those 3D worlds also have
haptic interfaces and these AIs could somehow experience taste,
(51:38):
there will still be limitations.
Speaker 2 (51:39):
I think.
Speaker 1 (51:40):
Again, these are steps forward towards the kind of general
purpose AI which I think will be feasible at some point.
But my inclination is to think it will be feasible
in the twenty forties, not by twenty twenty seven, as
some will say, but to be sure, steps are being taken
(52:03):
in that direction, and I actually welcome that kind of progress.
So unlike people who say, oh, AGI or the Singularity is
coming by twenty twenty seven, and we need to be
afraid or worried. I say, well, I don't see signs
(52:24):
of the AGI singularity coming soon, and I'm a bit
sad about that, because AGIs could solve a lot of
the problems that our species faces. Clearly, some thinking should
be dedicated to ensuring that they follow moral constraints the
way humans are taught to follow moral constraints. But I
(52:46):
think that problem is solvable still. If AGI comes sooner,
and I think the soonest it could arrive would be
next decade, the twenty thirties, I will be pleasantly surprised.
I want to be pleasantly surprised in that way. Okay,
I'm also optimistic.
Speaker 2 (53:05):
Thanks, Okay, thank you.
Speaker 3 (53:07):
I would like to give a positive example of Anton
and Olivia, two foreigners physically visiting our conference, already
for the second time. Anton is having a permanent chair at
the table. So I do invite again all foreign participants
(53:30):
to visit, if not this, then the next round of the conference.
Evidently I have to make a more active advertising campaign.
Speaker 2 (53:42):
I will correct for the time. Anyway, more questions and comments?
Are there more questions and comments? David, would you like
to add something.
Speaker 3 (53:52):
In addition to what you already put in the chat.
Speaker 6 (53:57):
Well, my first question is how long do we have
because I think this conversation could easily go on.
Speaker 2 (54:04):
Yes, the discussion is more important than the time.
Speaker 6 (54:08):
Yeah, because we might continue this discussion until at least
twenty twenty seven. I think it's a really important set
of ideas to figure out what is unique about humans.
I don't agree with most of the list that Ganadi presented.
I think the list that Gannadi presented of ten items
probably splits into three. As I said in the chat,
(54:32):
some of them I think AIS can already do quite well.
Like they're not all rigid optimizers. Some of them are
quite good at satisfying the bulk of the items come
down to Whell. The AI is not yet generally intelligent.
It doesn't have common sense, It doesn't have the ability
to transfer learning into totally different fields. And I accept
that we don't have that yet, but I do think
(54:54):
that we could have AGI quite soon. Not just by
scaling up large language models; that's clear, that's not
by itself going to succeed, but by joining large
language models together with other elements such as what Peter
Voss is developing, or there's all kinds of initiatives that
Ben Goertzel and others are trying, which include neural networks
(55:16):
as part of the overall brain. But there are other
interesting developments, and these other interesting developments, by the way,
can be accelerated by using neural networks to help the researchers.
So I think it is possible. It's unlikely, but it
is possible we could have general intelligence by twenty twenty seven,
as some people have suggested. What remains, however, is that there
(55:38):
is one and a half items in Gennadi's list, half
of the first and maybe the last one, where I
think it goes beyond general intelligence. It goes to phenomenological consciousness,
the sense of an inner "I" rather than just the
appearance of an "I". And it's not clear to me
that anything that AI researchers are doing will get there.
(55:59):
It could be that it emerges naturally unexpectedly, in the
same way that other aspects and characteristics of LLMs emerged unexpectedly.
But I think it's also feasible that so long as
AI models are based on anything like the hardware they're
currently running on, these systems won't be phenomenologically conscious, and
(56:19):
to that extent humans will have a unique value going forward,
but still there are reasons to worry. The reasons to
worry are that the AIs, even though they are not
phenomenologically conscious, they may still be used by ill intentioned people,
or by naive people, or by over optimistic people, or
by people who are deeply frustrated and hate the world,
whether it's Hamas terrorists or other kinds of terrorists, whether
(56:43):
it's people who are just fed up with the world
because of other reasons. They could use AI for terrible purposes,
which is why I think we need to prioritize addressing
these questions in advance. How do we ensure that AI
remains under our control? And that also means keeping it
under our understanding. And in neither of these are we
doing a good job, which is why I am concerned,
(57:05):
Which is why I have said that my p doom
is about five twelfths, which sounds like a strange number,
but I give the reasoning behind that in some of
the articles I've linked to in the chat. Now, as
I said, I think this discussion could easily go on
to twenty twenty seven, but I'm happy to hear Gennadi's replies
to some of these points at least.
Speaker 1 (57:26):
Yes, thank you David for the analysis and the critique,
and I do hope that in many respects the conversation
will continue through twenty twenty seven, and there will be
various installments of the conversation. I will first of all
outline points of agreement. So, there can arise real problems
(57:51):
with humans using AIS or humans designing AIS for certain
tasks where there always needs to be not just a
human in the loop, in my view, but a responsible
and ethical human in the loop. So, for instance, I
think it would be highly ill advised and dangerous to
(58:13):
have any sorts of autonomous weapons systems that make the
decision to kill humans or destroy infrastructure without an explicit
human sign off, or even in a situation where there
is an explicit human sign off, but the humans have
learned to really offload that cognitive responsibility to the AIs,
(58:38):
and we've seen that in certain contexts already beginning to happen.
So I think whenever there is the possibility of an
adverse effect on a human being, and this could be
in the context of war, or it could be even
in a much more, let's say, mundane context, say financial decisions,
(59:06):
if there's a situation where somebody could be charged a
higher price, somebody could be denied a loan, somebody could be
denied an opportunity of some sort solely on the basis
of an algorithm, that gives rise to concerns, and that
gives rise to the need for ethical implementation and safeguards
(59:30):
by ethical humans. I think perhaps in my view, it
is easier to develop those safeguards than some who have
raised those concerns might suggest, because I think most of
those safeguards are a matter of will rather than of
(59:51):
us not inadvertently missing something. So if we want to
have a situation where ethical humans remain in control of
key decisions, I think we could. It's a matter of
policy more so than anticipating every single possible contingency. Now,
it's interesting too that you do agree that inner experience,
(01:00:19):
the subjectivity that we have as humans, can't really be
produced by any AI system that currently exists or is
on the horizon, and we don't quite know the criteria for
the emergence of that inner experience. I think it will
(01:00:39):
become increasingly apparent over the coming years that intelligence and
consciousness are not the same, and you can have an
intelligent system that is not conscious, which is what a
lot of the at least near term AI systems are
going to be. But it will be a very important
(01:01:00):
question when can consciousness potentially emerge even in systems that
don't have a biological substrate. And I think that question
is going to consume the attention of a lot of
philosophers in the coming decades. But in the meantime, we're
(01:01:21):
not close to that yet, and I think it's good
to have that as an area of agreement about what
the distinguishing human attributes are going to be. As for
the others, I think, well, extraordinary claims require extraordinary evidence
to invoke that common statement. So I am open to
(01:01:46):
the evidence. I am open to the demonstration of an
AI system that can exhibit these attributes that I think
humans excel at or sometimes even have exclusively at present.
And it could be a matter of degree. So as
(01:02:07):
with common sense, some AI models today can simulate it,
but humans are still better at it because we have
more inputs, both from our sensory experiences and other humans
to indicate what a common sense decision would be. Can
AIs improve at that? I would say probably. But can
(01:02:28):
they ever improve to the point that they would surpass us.
That's where I have doubts, and I will also state
a more fundamental doubt. I'm not sure with the current
architecture of AI systems that we can get to an
artificial superintelligence in the way that is posited as
(01:02:52):
being necessary to reach the AI singularity. Because if you
give an AI system all of the training data that
exists currently, so that's all of the knowledge of humankind
that has been accumulated and written down somewhere or somehow recorded,
will it still be smarter than the smartest human who
(01:03:19):
just has incredibly fast recall of information and higher processing capabilities.
So yes, that would be an impressive individual if such
a person existed. But can we really reach like a
qualitatively higher gradation of intelligence with today's AI architectures. That's
(01:03:44):
where I have my doubts. So, could these AI systems
be the equivalent of, say, Stephen Hawkings or Albert Einsteins
among us, again with much faster computing capabilities? Sure they could.
But Albert Einstein and Stephen Hawking didn't make the human
species obsolete. They just made valuable contributions to human knowledge. Unfortunately,
(01:04:07):
they weren't even able to alter the course of history
beneficially to the extent that they had hoped. So, yes,
it would be great to have these entities among us,
but I think the rest of us will still be necessary.
Speaker 6 (01:04:22):
One very quick follow up to that. We both know,
Gennadi, about a particular AI agent called Aubrey, which has
been trained on the findings, writings, and ideas of Aubrey
de Grey, and Aubrey de Grey himself has said that
some of the responses of the bot are surprisingly intelligent,
coming up with ideas that were not in his mind.
(01:04:42):
And otherwise this bot had all the same attributes of
Aubrey de Grey, or many of the same attributes, but had
access to more training information, read more articles, and so
Aubrey already feels in some cases that his own research
may be accelerated by the assistance of this agent. And
what we need to get to the singularity is simply
that the software engineers who are currently designing the next
(01:05:04):
generation of AI, that their work will go much faster
because of simulations of them and their working environments. So
the key point is when we have the companies like
open ai or whatever who are able to unleash one
generation of AI to help them design much more quickly
the next generation of AI, and once that's in place,
(01:05:25):
it will allow them to get much closer to general intelligence.
So that's the tipping point. Are we seeing signs that
it's coming? Even people like François Chollet, who you
mentioned and who is a real gem. I saw him
give a talk in Seattle just over a year ago
at the previous AGI conference, in which he did outline
the ARC-AGI challenge, and he said, here are various
(01:05:49):
things that he thought AIs wouldn't solve anytime soon.
Since then, he has gone on record saying he
has revised down his timelines. I put in the chat a
conversation between him and Dwarkesh Patel. Dwarkesh Patel is probably
the best podcast now on what's happening in AI. He
does it very well, and François emphasizes there that he's
(01:06:11):
reduced his timelines, not to twenty twenty seven; he says
probably about five years out. So he is seeing AIs
solve some of the problems more quickly than he had
thought possible. That's evidence that things are still making
significant progress.
Speaker 1 (01:06:29):
Yes, that is fascinating, Thank you, thank you.
Speaker 3 (01:06:33):
I think that we have to stop the discussion; we have
spent a lot of time. So thank you very much
again, Gennadi.