Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
What is artificial intelligence?
I mean, what is intelligence?
And are we going to be replaced or killed by super intelligent robots in four years? In this episode, I want to answer some of these questions and explain why I think we are quite far from achieving Artificial General Intelligence, AGI, or super intelligence. Although I touch on philosophy of mind,
(00:23):
cognitive science and computer science a little bit, I did keep it fairly high level so that a general audience could follow it. As you'll hear, some of the missing pieces in current systems include a lack of factual grounding, a lack of common sense, and a lack of understanding of cause and effect. Without the ability to reason using abstract concepts,
(00:45):
today's AI systems cannot achieve human-level intelligence. So I'll get into all of this and more in the episode. I hope you find it educational and interesting. And by the way, I recorded this as a YouTube video originally, but because it got so much engagement, I thought I would share it out to my podcast for you too. So here's the episode.
(01:08):
Welcome to Cosmic Intelligence, a podcast where we explore the intersection of philosophy, cosmology, consciousness, and emerging technologies like artificial intelligence. If you're new here, I'm Chad, a philosopher, technologist, product manager, yoga teacher and attorney based in Los Angeles.
(01:52):
Artificial General Intelligence, or super intelligence, is not right around the corner like AI companies want you to believe. And that's because intelligence is really hard. Major AI companies like OpenAI and Anthropic, as well as Ilya Sutskever's new company, have the explicit goal of creating artificial general intelligence. And they claim to be very close to doing this using technology
(02:14):
that doesn't seem capable of getting us there. So today, I want to talk about intelligence, both human and artificial. Before I jump in, let me just say that I am optimistic about the promise of technology. And I'm also skeptical about the outlandish claims that are being made right now about AI and
(02:34):
general intelligence. Now, it's been a while since I've been an AI researcher, so I may get some of the details wrong in my explanation of the current state of AI. So if I do, please feel free to comment on that or give me feedback. Send me an email. Do you ever think about intelligence? Because I sure do.
(02:57):
(03:19):
What is artificial intelligence? I mean, like, what is intelligence? Am I intelligent? Is AGI or super intelligence right around the corner? Are we on the precipice of a techno-utopia or some kind of singularity? These are the questions that I want to answer today, in a hopefully very non-technical, but technically accurate, video. I want to try to offer a layman's overview of why
(03:43):
I think we're so far away from AGI and super intelligence, but I also want to speak accurately. So it's a delicate balance. I want to keep this interesting, and I want it to be useful to the broadest audience. So we'll see how it goes. And just to be clear, I'm trying to avoid talking about machine sentience or consciousness. In this video,
(04:05):
I'm just focusing on cognitive intelligence, or at least I'm going to try to. So I want you to keep this question in mind as we go: why do smart people want us to think that they've solved intelligence when they're smart enough to know that they haven't? Over-promising and under-delivering is a sort of artificial intelligence tradition. For example, in 1965,
(04:28):
AI pioneer Herb Simon said that machines will be capable, within 20 years, of doing any work that a man can do. That would have been 1985, which clearly didn't happen. Five years later, in 1970, we see computer scientist Marvin Minsky promising that in from three to eight years, we will have a machine with the general intelligence of an average human
(04:50):
being. Didn't happen. Then, in 2005, Ray Kurzweil predicted that we would have the requisite hardware to emulate human intelligence by the end of that decade, and effective software models of human intelligence by the mid-2020s. Here we are, and I think it's pretty clear that we don't have that yet. Actually, here's Ray last week standing by his
(05:11):
prediction that we will have AGI within the next five years.
In 1999, I made a prediction that we'd reach AGI, Artificial General Intelligence, by 2029. Within 30 years.
You think we're still on track for 2029?
I think that's probably pessimistic.
(05:33):
You keep pushing this up?
Well, Elon Musk says it's going to be two years. Other people are saying three or four years. Five years is probably conservative.
Sundar Pichai, the CEO of Google and Alphabet, said way back in 2018:
AI is one of the most important things humanity is working on. It's more profound than, I don't know, electricity or fire.
(05:54):
This kind of bold statement might be warranted if we were close to any kind of truly intelligent machine. And Elon Musk, of course, has been claiming every year for the past 10 years that fully autonomous Teslas were coming within a year, every year. And they're still not here. Finally, earlier this month, Ilya Sutskever, one of the co-founders of OpenAI,
(06:14):
revealed his new startup, Safe Superintelligence Inc., a startup with one goal and one product: creating a safe and powerful AGI system. He says that they won't release a product until they achieve super intelligence. Although we may be inching closer to solving intelligence, it's still a ways
(06:35):
off. And I think we don't fully understand intelligence. So that's part of what I want to talk about today. You can see why I'm skeptical. I mean, there's reason to be skeptical after so many decades of empty promises. And that's without even getting into the details that I want to talk about today. For those of you who are new to my channel, in addition to being a philosopher and attorney, I'm
(06:57):
also a computer engineer and a former AI researcher. So I do understand this stuff at a certain level, a pretty deep level. But it's been a while since I've actively worked in the field. So again, if I get some details wrong, please let me know. I actually left the field of AI back in the 90s because I felt like it didn't live up to the promises and the hype back then. It was the AI winter, as they called it. So it
(07:20):
seems like something new and different is happening now. It feels that way. And so that's why I'm excited about what's happening, but also want to talk about it more. And to be clear, I think we will eventually get to some kind of machine intelligence that is impressive and reliable and transparent and unbiased and all that. But I just don't think the current approaches are what we need to get us there. Whatever the
(07:43):
potential of AI is, I want to offer an accurate, realistic snapshot of the present moment. So I want to use this video, and this entire video series I'm working on actually, primarily to offer a more skeptical perspective around all the grandiose claims that are being made right now, and to counteract all the hype, or at least offer some clarity about it. So Apple, as I'm sure you
(08:04):
saw a few weeks ago, came out with their Apple Intelligence. What's great about this, I think, is that it's a more modest use of AI for aesthetic purposes and practical personal uses. And I think it's the first real attempt, outside of maybe Perplexity, at grounding answers and responses in some kind of actual reality. And what's nice about their approach to it is
(08:26):
they have no aspirations to create some kind of super intelligence. Okay, so what is artificial general intelligence? Let's start with that. One challenge here is that there's no one agreed-upon definition. Maybe you can say that it's performing any task that a human can, including tasks not encountered before. That's pretty good. It's the ability
(08:48):
to think and act intelligently in any domain, like we humans can. But a crucial part of it is the ability to adapt and learn, which is something I want to come back to later. Okay, so that's sort of a rough approximation of AGI. Actually, the artist James Bridle argues that AGI is already here in the form of corporations and what he calls corporate AI or corporate
(09:12):
intelligence. And I think that's a pretty interesting perspective. Here's James: The answer I quite often give when people ask me when is real AI going to turn up, I say real AI is already here. It's corporations. Corporations are large assemblages of people and machinery and infrastructure, connected to networks. They
(09:35):
respond to pleasure and pain; pleasure is profit and loss is pain. They operate according to very narrow views about what it is that they should be doing. They stick to these very narrow goals, you know, almost regardless of anything else that happens. They have free speech. They have legal standing, they have legal personhood. They have power in courtrooms. Very hard to kill, although not completely
(09:58):
impossible to do. They are essentially life forms that we've created, and we've realized that having these kinds of organisms living amongst us, it's almost like living in the aftermath of a kind of alien invasion. And Bridle elaborates in his book Ways of Being: That's what
(10:20):
happens, it would seem, when the development of AI is led primarily by venture-funded technology companies: the definition of intelligence which is framed, endorsed and ultimately constructed in machines is a profit-seeking, extractive one. And he makes an important point, a point similar to the one that Ezra Klein made on his podcast a few
(10:42):
months ago: that the way we think about AI and intelligence has largely been defined by its emergence inside of capitalism. And I would add to that that its emergence out of government and military funding has also heavily influenced the way it's being thought about and talked about. I want to offer one interesting little side note, too. I was reflecting
(11:04):
on how today's AI systems kind of reflect our culture and our society at the moment. I mean, isn't it interesting that if we look around in our society, there's a general lack of grounding in factual reality or independent verification? Nobody's working with the same set of facts. So it's fascinating that our AI systems
(11:25):
kind of reflect that: they hallucinate, and they don't really have factual reliability. So that's interesting, I think. It also reminds me that we live in this cultural age of nostalgia and pastiche. And in a way, our AI systems are essentially enormous mashup machines. So I guess what I'm trying to say here is, to paraphrase Marshall
(11:46):
McLuhan, we shape our technology, and it in turn shapes us. So there's this feedback loop going on. And I'm just fascinated by the way AI seems to reflect the state of our world right now. So that's interesting. Okay, so let's try to define intelligence. Even more challenging, as we'll find out. If we're attempting to recreate human intelligence, and then surpass it, what does that look like?
(12:08):
What does intelligence mean, exactly? It seems obvious at first blush, but when you really dig into it, it gets more slippery. And if you look at technology leaders, and especially transhumanists, they seem to be obsessed with intelligence at the expense of all other attributes and faculties that life, and especially humans, exhibit. You
(12:29):
could even say that our entire modern society is obsessed with intelligence, as little intelligence as we seem to be exhibiting collectively at the moment. But yeah, I mean, if you think back on society and government structures, you could argue that our governments today, in the West at least, are largely a meritocracy, or at least aspirationally. And you
(12:51):
know, IQ tests and standardized testing determine your fate in life, at least if you ignore inherited wealth and privilege. So in our society, I think intelligence is over-emphasized and overvalued. And I think that is kind of feeding the AI hype, and it's influencing our thinking about intelligence.
(13:12):
Intelligence is hard. Defining it is hard. There's a lot more going on with intelligence than AI researchers and developers acknowledge. You know, intelligence, I think, is actually quite mysterious. And not only that, but I think very few AI developers are also trained in cognitive science or philosophy of mind. And this persistent yet ill-fitting metaphor of the mind as
(13:34):
a machine is a very misleading metaphor, I think. I mean, is the mind algorithmic? I don't know that it is. Early AI researchers certainly thought it was. And, you know, that's really influenced the development of AI ever since. I actually touched on this last year in my video about quantum consciousness, so if you're curious about that, go check that out. Defining intelligence as mere computation means that
(13:56):
we reject a much richer understanding of mind and consciousness. And I want to explore that particular topic in more depth in a future video. But let's actually try to define intelligence. It's obviously reasoning, analysis, logic. Pretty clear that that's all intelligence. Learning, the ability to learn, the ability to plan. That's intelligence,
(14:19):
obviously. Nothing controversial so far. In fact, one dictionary definition I like is: intelligence is the ability to acquire and apply knowledge and skills. Very straightforward. But then, does intelligence include intuition, discernment? Probably. But then wisdom seems separate, though perhaps on a continuum. And then what about creativity? Actually,
(14:43):
Ray Kurzweil has a pretty broad definition of intelligence in his books about AI. He says that human intelligence is diverse and subtle, which is a nice acknowledgement. And for him, intelligence includes artistic aptitude and creativity. He even includes emotional intelligence as
(15:04):
part of his definition of intelligence, which is interesting. We'll come back to that. But does intelligence require self-awareness, or metacognition, as they call it? Does it require sentience and consciousness? I don't know. You know, as anthropocentric as
(15:24):
it is, one definition of intelligence that I really like is: it's whatever humans can do. But then that includes all the faculties that I just mentioned, including creativity and imagination and intuition. So it's a little hard to define. So another answer to the question of what is intelligence is: nobody really knows. Right? Or we think we know, but it's slippery. And I think
(15:49):
it's possible that this machine intelligence that we're creating is a wholly new thing. It's a categorically new thing. It's not ever going to be quite human intelligence; it might be its own intelligence, in the same way that an octopus, with its multiple brains in different parts of its body, has its own kind of intelligence. That's not human intelligence, but it's certainly some kind of
(16:09):
intelligence, right? So I think machine intelligence is possibly going to be some kind of sui generis, wholly new thing that we're creating. And actually, Alan Turing, the famous computer scientist, who was kind of like the godfather of AI in some way, recognized back, you know, 70 years ago that machine intelligence would probably be different in kind from human intelligence. He thought that artificial
(16:32):
intelligence might very well be a wholly new thing that is not analogous to any form of intelligence that we know today. And for that reason, I think AI may never supplant humans; it might forever just kind of augment us and be a collaborator, which is more exciting, I think. The point I'm making is that understanding intelligence is important. If
(16:52):
we're going to create an AI system, we need to understand exactly what we're trying to create, if we're going to create it right. That seems pretty obvious. But you know, I don't know that our understanding of intelligence and how it works is reflected in our current systems today. And if we don't understand what we're trying to create, I think what we create
(17:13):
is going to end up being mysterious and unpredictable, biased, misaligned, and kind of opaque, even alien, and maybe a little terrifying. I mean, we see this already, right? Maybe you feel this way towards ChatGPT, or Sora. So having said all that, is there such a thing as general
(17:33):
intelligence? It's unclear. In any case, however we define intelligence, there is still some use, I think, in creating systems that attempt to replicate even a narrow kind of intelligence. And there's definitely a big opportunity here. Again, I'm not opposed to making these things, I just think we should be more thoughtful and deliberate about
(17:53):
what it is we're making, and what's driving all that. Okay, so let's talk about the brain and the mind. In our modern times, the words brain and mind are used interchangeably. The brain is obviously that chunk of stuff in your skull, the gray matter. And the predominant view of mind is that it arises from
(18:15):
the firing of neurons in the brain. The mind, as well as our consciousness, essentially emerges as an epiphenomenon of the brain. This hasn't been scientifically proven, actually, but it is a long-standing assumption, and it is the prevailing view. I'll explore that assumption in a future video.
(18:37):
But this is the kind of view that is driving, and has been driving, AI development. So for this reason, computer scientists in the 1940s and 50s suggested that we might model this neural network in the human brain in order to achieve human-level intelligence, and maybe even beyond. The neuroscientist Henry
(18:57):
Markram actually announced back in 2009, at a tech conference, that he was going to simulate the entirety of the human brain in a computer within the decade:
So there are literally only a handful of equations that you need to simulate the activity of the neocortex. But what you do need is a very big computer. And in fact, you need one laptop to do all the calculations just
(19:18):
for one neuron. So you need 10,000 laptops. So where do you go? You go to IBM, and you get a supercomputer, because they know how to take 10,000 laptops and put it into the size of a refrigerator. So now we have this Blue Gene supercomputer. We can load up all the neurons, each one onto its processor, and fire it up and see what happens. Take the magic carpet for a
(19:44):
ride. Sadly, he never finished his project. If we gaze back over the history of philosophy of mind and cognitive science, there are basically two types of reasoning acknowledged and studied: deductive and inductive inference. Deductive inference is where you apply general principles to particular
(20:08):
instances. For example: all men are mortal. Socrates is a man. Therefore, Socrates is mortal. Again, with deductive reasoning, we're going from the general to the specific. If it's raining, then the ground is wet; you see that it's raining; therefore, the ground is wet. Although it may not be, because there's a tree or a gazebo or a car
(20:31):
covering the ground in question. Anyway, this was the first approach to AI, in what are known as expert systems or classical AI. And the idea with this was that if we could feed millions upon millions of these kinds of principles, these true statements, into a machine, boom,
(20:51):
we get intelligence. But as you can see with the rain example, it's brittle and can't really learn or adapt. So if you just have deduction, you know, it's not going to really be able to handle the very fuzzy and messy world, right? It's also not well suited to learning and adapting. Deductive conclusions have to assume that the premises are true and valid
(21:13):
for all time and all situations.
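Just to make that concrete, here's a minimal sketch, in Python, of how a classical, rule-based system works. The rules and facts are invented for illustration, and real expert systems were vastly larger, but the brittleness shows up even at this scale:

```python
# A minimal sketch of a classical, rule-based "expert system".
# The rules and facts are invented for illustration only.

RULES = [
    ({"it_is_raining"}, "ground_is_wet"),
    ({"ground_is_wet", "wearing_sandals"}, "feet_get_wet"),
]

def forward_chain(facts):
    """Apply every rule whose premises hold until nothing new is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"it_is_raining"}))
# {'it_is_raining', 'ground_is_wet'}

# The brittleness: the gazebo exception isn't representable here at all.
print(forward_chain({"it_is_raining", "gazebo_covers_ground"}))
# Still concludes 'ground_is_wet' unless someone hand-writes yet another rule.
```

Every exception, like the gazebo, has to be anticipated and hand-coded as another rule, which is exactly the brittleness I'm describing.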
It also suffers from relevance challenges, like when do you apply a particular truism to a given situation. So we can already see that intelligence has layers, that there are steps that happen before, for example, deduction. So we'll come back to this in a moment. Among other things, the general principles used in these deductive statements
(21:35):
require testing in the laboratory of life. The principles rattling around in your head are constantly subject to revision, or at least I hope they are. For example, if you grew up in the northern hemisphere, you may have been told or learned that all swans are white. But then you travel to the southern hemisphere, and you see a swan that is black. So, you know, these things are always subject to
(21:57):
revision. And this actually brings us to the other kind of inference that we are pretty well aware of, which is inductive inference. And this is where you arrive at general conclusions from particular observations. So you're identifying patterns from examining data, observing the world, and performing experiments. So to go back to swans: if you see white swan after white swan, you probably
(22:21):
would conclude that all swans are white. And this is how machine learning works. You know, they study millions of photos labeled cat, and they start to have a general sense for what constitutes cat-ness, or at least mathematically they do. And so the basis for machine learning systems today is this inductive reasoning. They're trained on millions of samples to learn what, for example, a cat or a dog or a human is.
to learn what for example, a cator a dog or a human is. The
challenge with inductivereasoning, is that unlike
deduction, inductive knowledgeis always provisional, right,
the world always changes. Andthe future doesn't always look
like the past. It also doesn'trequire knowledge of cause and
effect. So these systems thatlearn from induction or machine
(23:03):
learning, are inherentlybrittle, brittle as well, they
can't adapt, and they can'thandle a world that is complex
and messy. And so there's alsothis kind of longtail problem in
inductive reasoning, or machinelearning where a specific
situation that was not in thetraining data is going to trip
up the machine. So in thosecases, the machine is unable to
(23:25):
reason towards a response, at least not with induction alone. So essentially, machine learning systems on their own can't extrapolate from prior experience. For example, an autonomous driving system trained in the United States would face numerous challenges if it was deployed in India. It would encounter cows in the road, and it would respond unpredictably. The traffic signs
(23:47):
would obviously be different. And even putting language aside, there's gonna be tuk-tuks and rickshaws and scooters with four people on them. And, you know, not to mention that in most places in India that I've been to, at least, traffic rules are not really followed. So you know, it's the kind of environment where the training
(24:09):
data will not be helpful in the new situations. So it seems like we almost need another type of thinking, something that binds deduction and induction together. And as we will see, this is known as abductive inference. We'll explore that after we look at how current AI systems work in a little more detail. So current AI systems, or machine learning
(24:30):
systems, are kind of an improvement on the classical AI systems that I mentioned earlier. You recall, the classical AI systems were similar to deduction, right, where they had knowledge kind of hard-coded in these true statements and decision trees. But you know, going back 70 years, there were some attempts at modeling the brain, as I mentioned, through neural networks, and so that's kind of
(24:54):
experienced a resurgence. Going back maybe 15 years, people started to have enough data and compute power to try to model these things more realistically. And so today, the AI systems that have captured the popular imagination, ChatGPT and all that, are for the most part based on these neural networks, on this theory of cognition that posits that intelligence and the mind arise
(25:16):
from the firing of neurons in the brain. So in short, they're an attempt to model the brain. That includes ChatGPT, Anthropic's Claude, and Google's Gemini, not to mention the image generators and all the other generative AI. With companies like Apple adding neural engines to their chips, we can see a lasting commitment to this machine learning approach. These machine learning systems are
(25:38):
again a form of inductive reasoning. And they have finally realized some success, after decades of failure, because of a combination of access to larger datasets and specialized computer technology in the form of GPUs. So, boiling it down: the way it works is that there are these tokens, basically representing words or parts of words, that are assigned a
(26:00):
numerical score. And then sentences or images are parsed and generated using these kind of large mathematical matrices of probability distributions that pass through multiple nodes in these neural networks. That's probably the most concise description of the way that these machine
(26:22):
learning systems work today. It definitely glosses over a lot of detail. But that's essentially it: it's essentially using math to generate text and images, just assigning numerical values to different pieces of information.
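If you want a picture of what "using math to generate text" means, here's a deliberately toy sketch. The token table and probabilities are invented, and a real large language model computes the distribution with billions of learned weights rather than a lookup table, but the shape of the computation is the same: given the tokens so far, produce a probability distribution over the next token and sample from it.

```python
import random

# Invented vocabulary: each token (word piece) gets a numerical ID.
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3, "on": 4}

# Invented "model": for a given previous token, a probability distribution
# over the next token. A real LLM computes this with a huge neural network.
next_token_probs = {
    "the": {"cat": 0.6, "mat": 0.4},
    "cat": {"sat": 0.9, "on": 0.1},
    "sat": {"on": 1.0},
    "on":  {"the": 1.0},
}

def generate(start, length=6):
    tokens = [start]
    for _ in range(length):
        dist = next_token_probs.get(tokens[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

Notice there's no model of cats or mats anywhere in there, just numbers about what tends to follow what.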
This kind of system is very widely used, and has been for some number of years. For example, Facebook uses machine learning to decide
(26:43):
what to show you in your feed. Same thing with Instagram. Google uses it to recommend YouTube videos, maybe even this one. These companies use machine learning to translate between languages and to show you ads, ad targeting. Any kind of text-to-speech or speech-to-text system is using machine learning. And you could describe machine learning as a bottom-up
(27:05):
approach. It's a bottom-up approach to intelligence, where information is gathered from observation, as opposed to top-down intelligence, which incorporates prior knowledge about the world. Again, it's inductive as opposed to deductive. Okay, so now we get to the heart of the matter. I want to talk about what's missing from current systems that I think you need to get to AGI. Again, I'm not talking
(27:26):
about fairness, transparency, and other safety issues. I'm focused entirely, in this video, on the pieces that are missing if we're going to create true intelligence in a machine. Those other issues, bias and transparency and fairness and all that, are also important, but that's a different topic. This is more just focused on the question of general intelligence. So the first thing that is probably obvious to
(27:48):
anybody right now is that factual accuracy is a real challenge with these systems. AI systems currently have persistent and seemingly intractable challenges with facts, including organizing facts and understanding relationships between facts. And this is because of the way they're designed: machine learning isn't really geared towards representing precise factual knowledge. It's more of, like,
(28:13):
a mashup machine, or like an amateur poet, you know what I mean? So, you know, we saw this recently with Google telling people to eat rocks and to put glue on their pizza. Briefly, there have been some attempts recently to address this factual issue. This includes something called retrieval-augmented generation, or RAG. I'm not going to go into all the details of it. But
(28:34):
basically, the idea is that you kind of bolt onto your machine learning system some reliable source of factual knowledge. And the idea there is that it will then allow the LLM, or the machine learning system, to be more reliable. But the challenge with this is that it's just basically
(28:54):
kicking the can down the road a ways in terms of achieving general intelligence. You would still have to create a reliable system that could decide for itself what sources to trust. And you would ideally not want to have to hard-code that every time you enter a new knowledge domain.
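For what it's worth, here's a rough sketch of the RAG idea. The documents and the crude word-overlap retrieval are stand-ins for a real embedding model and vector database, but the bolt-on structure is the point:

```python
# A toy retrieval-augmented generation pipeline. The "knowledge base" and
# the overlap-based retrieval are placeholders; real systems use embeddings
# and a vector store, then hand the prompt to an actual language model.

KNOWLEDGE_BASE = [
    "Water boils at 100 degrees Celsius at sea level.",
    "The capital of Australia is Canberra.",
    "Benzene has a ring structure of six carbon atoms.",
]

def retrieve(question, k=1):
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question):
    # The retrieved facts get pasted into the prompt, and the language
    # model is asked to answer using only that context.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the capital of Australia?"))
```

The model is still doing the same pattern matching; we've just narrowed what it pattern-matches against, which is why I say it mostly kicks the can down the road.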
One way to summarize what's missing from current systems is that they don't have common sense. To quote Gary Marcus, the computer scientist:
(29:19):
the great irony of common sense, and indeed AI itself, is that it is stuff that everybody knows, yet nobody seems to know what exactly it is, or how to build machines that have it.
This is because the systems have no working model of the world internally. They only know what words tend to follow other
(29:39):
words, or what sorts of pieces of images tend to follow other pieces of images, that kind of thing. And part of common sense is understanding the world and understanding cause and effect. Understanding cause and effect is arguably the most important missing piece. And it's deeply related to basic reasoning that goes beyond mere induction. Maybe we learn
(30:02):
it in infancy. And maybe a machine learning system could learn it eventually through a large enough training set. But that seems unlikely, and I'm skeptical of that. As for humans, to echo Immanuel Kant, we seem to have a natural, intuitive sense for time, space and causality. And I have
(30:22):
thoughts about why that might be, but I'll defer that to another video as well. I feel like I'm always promising things in future videos, but that's just the way it is to try to keep these things relatively short. And I know this video is going to be pretty long. Another way to describe this kind of missing piece, this common sense piece, is to point to the fact that machine learning systems can't handle abstract concepts.
(30:46):
So they're not able to sort of think in abstraction. At the level of language, this is semantics: the meaning of words and how the words relate to the world, and to each other. Systems like ChatGPT are just manipulating symbols, but without any understanding of the meaning, the actual semantic meaning, of those symbols, or what
(31:08):
those sentences are actually describing in the world. And so this is related to symbolic representation. If you remember the discussion of classical AI and deductive reasoning from earlier, machine learning systems don't have this kind of map of symbols and categories of symbols and relationships and
(31:28):
domain trees and all that, which they would need to really solve this. So for example, you could have a medical diagnosis system that's based entirely on machine learning and induction. And it would probably recognize flu symptoms based on its training data, maybe even like 99% of the time, right? But what about more obscure illnesses? Or what if
(31:50):
there was a Reddit post that jokingly talked about using edible marijuana as a cure-all for the flu, or things like that? You want to have a system that's going to think these things through. So it's fine for a medical diagnosis system to learn about various diseases and treatments by studying medical textbooks. But then we want the system to develop
(32:11):
a related set of abstract concepts that it then reasons with, rather than simply pattern-matching words and phrases.
I mean, do you see what I'm getting at here? We want these systems to have some level of understanding and capacity for contemplation. Essentially, AI systems need abstract causal knowledge, top-down knowledge, in addition to the ability to learn via inductive reasoning.
(32:34):
This might be solved by combining top-down and bottom-up approaches, by combining deduction and induction. And some AI researchers are already exploring this possibility.
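Just to gesture at what combining the two might look like, here's a schematic sketch: a bottom-up, statistical guesser whose output gets checked by top-down, hand-written causal rules. Everything in it, the symptoms, the rule, the confidence numbers, is invented for illustration; real neuro-symbolic systems are far more sophisticated than this:

```python
# Schematic hybrid: a bottom-up statistical guess, checked by a
# top-down symbolic rule. All names and numbers are made up.

def pattern_matcher(symptoms):
    """Stand-in for a trained model: returns (diagnosis, confidence)."""
    if {"fever", "cough"} <= symptoms:
        return ("flu", 0.93)
    return ("unknown", 0.20)

CAUSAL_RULES = [
    # Invented rule: if the patient was recently vaccinated against flu,
    # flag the flu hypothesis for reconsideration despite surface symptoms.
    lambda symptoms, diagnosis: (
        "reconsider" if diagnosis == "flu" and "recent_flu_vaccine" in symptoms
        else "accept"
    ),
]

def diagnose(symptoms):
    diagnosis, confidence = pattern_matcher(symptoms)
    for rule in CAUSAL_RULES:
        if rule(symptoms, diagnosis) == "reconsider":
            return f"{diagnosis}? (low confidence, symbolic rule flagged it)"
    return f"{diagnosis} ({confidence:.0%})"

print(diagnose({"fever", "cough"}))
print(diagnose({"fever", "cough", "recent_flu_vaccine"}))
```

The point is just the division of labor: induction at the bottom, something like deduction on top.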
Another way of describing this piece is to say that AI systems need the capacity to develop theories about the world, theories that can be revised based on experience and counterfactuals,
(32:58):
on testing, essentially. To quote Gary Marcus and Ernest Davis: no fact is an island. To succeed, a general intelligence will need to embed the facts that it acquires into richer, overarching theories that can help organize those facts. We saw this with large language models: they don't understand anything they're saying. They're just stochastic parrots, as
(33:20):
people say, kind of dumb machines. In short, we want a thinking machine that is developing an ongoing model of the world, full of interconnected concepts with which it can reason. We humans do this all the time: we associate a new piece of information or a new concept with prior concepts and memories, through association and analogy. Oh, this reminds me
(33:42):
of the other thing, that kind of thing. Sometimes those are things that happened to us and even carry a feeling and a sensation. I mean, I don't think emotion or embodiment are necessary for true intelligence, but there is also more to explore there, I think, again, in a future video. The same thing goes for desire, right? Like, I think there's something maybe going on between desire
(34:03):
and intelligence. For us humans, our model of the world is shaped in part by our desires and motivations, and our desires are in turn shaped by our model of the world. The contour of the world of possibilities is shaped by our ideas about what's possible. I think there's a lot more to explore around desire and understanding. But again, future video. A larger point
(34:25):
that I want to make is that technology is always downstream of theory. Let me say that again: technology is downstream of theory. Computer scientist Erik Larson reminds us of this in his wonderful book, The Myth of Artificial Intelligence. As we can see by now, the current approach to achieving general intelligence
(34:45):
is not built on any viable, robust theory of intelligence. Instead, researchers just keep piling on more data and faster processors with a kind of blind faith that machine learning is all we need. I guess it's based on the idea that humans with their neural networks, you know, that seems to be all we needed.
(35:08):
I guess that's kind of the idea. But I don't think there's really a one-to-one correspondence there. And I've explained that a little bit in this video, but I think there's a lot more to unpack there, too, in a different video. Anyways, there's this notion that we humans are blank slates, and there should be no need for prior knowledge, hard coding, all that stuff. So they've
(35:28):
created these systems that are impressive at a very narrow set of tasks. And they do an okay job creating the illusion of intelligence. But perhaps there is one theory of mind that would address most of these missing pieces. That's abductive reasoning.
Abductive reasoning is a lesser-known form of inference that
(35:50):
hasn't been studied as extensively, and really only came to prominence 100 years ago with the philosopher Charles Sanders Peirce. Abductive inference is forming hypotheses based on observation, experience, and common sense. It's essentially making educated guesses. It's known as inference to the best explanation. Abductive inference is essentially making guesses or
(36:13):
hypotheses that you then confirm through deduction, investigation, experimentation, et cetera. For example, you come home, and the living room light was on when you arrived, but you're confident that you turned it off when you left. Then you form a hypothesis that someone must have been in your house while you were away. Might not be true; it could have been a smart
(36:33):
light system or something else.
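If you wanted to caricature that in code, it might look like this: score a handful of candidate explanations by how plausible each one is and how well it would account for the observation, then pick the best. The hypotheses and numbers are invented, and the hard part, which this sketch completely dodges, is generating the candidate hypotheses in the first place:

```python
# Toy "inference to the best explanation": pick the hypothesis that best
# accounts for an observation. All priors and likelihoods are invented.

observation = "living room light is on, but I'm sure I turned it off"

hypotheses = {
    "an intruder was in the house":   {"prior": 0.02, "explains": 0.95},
    "a smart-light schedule fired":   {"prior": 0.30, "explains": 0.90},
    "I misremember turning it off":   {"prior": 0.50, "explains": 0.85},
    "a power surge toggled the lamp": {"prior": 0.05, "explains": 0.40},
}

def best_explanation(hyps):
    # Score = how plausible the hypothesis is to begin with, times how
    # well it would explain what we actually observed.
    return max(hyps, key=lambda h: hyps[h]["prior"] * hyps[h]["explains"])

print(best_explanation(hypotheses))  # "I misremember turning it off"
```

Notice the sleight of hand: the program only ranks hypotheses a human already supplied. Coming up with the right candidates out of an open-ended world is the part nobody knows how to automate.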
But this is the kind of example I'm talking about: essentially, the kind of thinking we associate with invention and detective work and scientific discovery. It's flashes of insight. It's those kinds of eureka moments where you may have woken up from a nap, or been in the shower, or daydreaming, and suddenly something comes to you, like the solution to a
(36:55):
long-standing problem just pops into your head. One of the most famous examples is August Kekulé's daydream reverie about a snake eating its own tail, which inspired his insight into the structure of the benzene molecule and the fact that it's a ring structure. That kind of thing is an example of abductive reasoning. So, in contrast with induction: inductive reasoning
(37:19):
moves from facts to generalizations that provide a theory of regularity, which can be disproven. In contrast, abduction moves from a specific observation to a hypothesis that might explain it. It's a way of moving between modes of thinking that connects deduction and induction with flashes of insight. It's kind of the glue that holds deduction and
(37:41):
induction together. Induction basically requires abduction as
a first step. This is why I think getting machine learning systems going on a new dataset requires human supervision in the beginning: we bring our general knowledge of the world to bear on the problem, or on the training set. For example, when training AlphaGo, the Google DeepMind team used human
(38:01):
inferences, abductive inferences in other words, to design a system to successfully accomplish tasks. They supplied, basically, wisdom about Go from the outside for the inductive framework they were building. I think the best
example of abductive reasoning is the Sherlock Holmes stories, right? In fact, there's a great quote
(38:25):
where Sherlock Holmes says: when you have eliminated the impossible, whatever remains, however improbable, must be the truth. That's a pretty good definition of abduction.
And we see with Sherlock Holmes, or Columbo, or any kind of archetypal detective, that they're drawing on an understanding of human nature and behavior and past
(38:48):
experience, and a general knowledge of the world, to identify patterns and connections between the scant list of facts, to arrive at a hypothesis that can then be confidently tested through observation. So abduction,
unlike deduction or induction, is not necessarily linear thinking. There are often these kinds of mysterious, unconscious
(39:09):
leaps. It often manifests as kind of a spontaneous insight, or a flash of insight. You know, sometimes it's conscious, sometimes it's automatic. And although it does require deduction and induction, I think it points to this more mysterious phenomenon in our thinking, where hypotheses and solutions spontaneously emerge as a result of spending a lot of
(39:34):
time with a problem and letting it kind of percolate in, maybe, our subconscious or something. These ideas, these new solutions or new thoughts, arrive fully formed in your mind, right? Like an inventor or detective, or a medical doctor when diagnosing an unusual set of symptoms. So, like, what is
(39:54):
going on with abductive reasoning? We don't really know, and this makes it hard to model in code. I mean, perhaps it's something quite deep, a pointer to the nature of mind and consciousness. But again, that kind of question is better explored in another video. In any case, this type of reasoning gets to the heart of what it
(40:16):
means to be human, I think, and to think adaptively and
creatively. The bottom line is, with abduction, an AI system could reason with incomplete or inconsistent information. Again, as we've seen, current AI systems only know what they've seen in their training sets. And because induction is always
(40:37):
brittle, things are always subject to change or exception. Abduction is necessary to provide explanations in particular instances, to adjust the inductive knowledge set, to hypothesize about the nature of the outlier, and to revise internal models. So the moral here is that AI systems need to be adaptable.
(40:59):
And we need to address all of the missing pieces that I just listed. We want these systems to be able to handle imprecision, vagueness and fuzziness in the real world. I mean, that's part of what it is to be intelligent. There has been some academic research into addressing abduction, but it seems that these AI researchers are exploring the idea and testing it using large language models.
(41:21):
And I don't know if that's going to work, because using what is basically induction to achieve abduction doesn't seem like the proper application of the theory. So again, I don't think true intelligence is possible without not only addressing these missing pieces but solving abduction. But I guess time will tell. I did ask the three most
(41:43):
popular chatbots whether they are adept at abductive reasoning. I asked them: are you able to perform abductive reasoning? Here's what ChatGPT said:
Yes, I can perform a form of abductive reasoning to some extent. I can generate hypotheses or explanations based on the information you provide and the knowledge I have up to my
(42:04):
last training data in September 2021. However, please note that my ability to engage in abductive reasoning is limited to the patterns and information present in that training data. For complex or specialized problems, it's always a good idea to consult with domain-specific experts who have access to the most current information and research.
You see
(42:24):
how it can only reason when a pattern matches on the same patterns in its training data. Google's Gemini said essentially the same thing. And then when I asked Claude, Claude was more interesting and honest. It was the only one that really admitted to not having any abductive capabilities:
I don't have specific capabilities for abductive reasoning, but I can
(42:46):
try to explain the concept.
Abductive reasoning involves making inferences to the best possible explanations based on limited or incomplete information. It typically starts with an observation or set of observations, then seeks to find the simplest and most likely conclusions that could explain those observations. While I don't have explicit programming for abductive reasoning, some of
(43:07):
my machine learning capabilities allow me to make logical inferences and provide likely explanations for limited information. But I don't have the same flexible abductive reasoning capabilities that humans possess. My reasoning is limited to what my training data and algorithms enable.
Another
challenge with machine learning is that it's greedy and requires massive amounts of data. Systems like
(43:29):
ChatGPT are voracious, and they're already running out of training sets. And that's why they're creating, like, synthetic data, which seems crazy and doesn't seem like it's the right path to follow to solve that. There have been numerous stories in the news recently of major AI companies, like, running out of data. So those are just some of the missing pieces that I wanted to mention. I'm sure there are other missing pieces
(43:52):
too. And I wanted to keep this discussion kind of limited in scope to whether or not there's any kind of intelligence there. But yeah, there are other important issues, like bias and lack of fairness, misinformation, deepfakes, alignment more generally, the environmental impact of running all these systems in the cloud.
(44:12):
And then, of course, the impact on employment and the economy. But you know, I offer all this to underscore the point that intelligence is really hard and kind of mysterious. Another way to summarize what's missing from today's systems is trust. As Gary Marcus and Ernest Davis have said, the core problem is trust. Current systems can't be trusted with anything important.
(44:33):
We've seen this with Google's experimental AI search overviews, with the glue and the pizza and the rocks and everything. So in short, what's missing today is basically broad intelligence that we can trust. Another way of summarizing is to say that we don't have top-down intelligence. The shortcoming of machine learning is that it is bottom-up, and the knowledge is only
(44:56):
gathered from observation. But intelligence also requires top-down reasoning based on prior knowledge and background knowledge about the world. Google Search has been combining machine learning with symbolic manipulation for years, and so it's kind of a mystery to me as to why their AI search results are so unreliable. So how are we going to get to artificial
general intelligence? Solving for common sense and true
(45:19):
reasoning will be a real revolution in artificial intelligence, no doubt, but I don't really see how we can get there with current systems. So again, why is there so much hype right now, in light of all these missing pieces? Why are AI developers so confident that AGI is around the corner? I mean, they should understand everything that I've been saying. And I think, you know, it's a combination of cynical
(45:42):
puffery and a form of groupthink. It benefits these companies for people to think that we're almost there. And few people in the industry have any incentive to question the theories underlying the current approach to intelligence. So if no AI company is pointing to some novel approach beyond
(46:02):
machine learning and large language models, I don't see how we're going to reach AGI by 2029, as Kurzweil predicts. Solving for common sense and true reasoning will be the key; we need a more robust theory of mind. In fact, OpenAI has acknowledged these shortcomings.
In essence, Sam Altman, the CEO of OpenAI, was quoted as saying that he thinks we've kind of reached the end of the road
(46:24):
with current approaches and large language models. He says that these models have reached their limit, that any future advances will require new ideas, and that more progress will not come from making the models bigger; they will make them better in other ways, but he doesn't really say how, exactly. Anyways, his statement suggests that GPT-4, the latest model
(46:46):
from OpenAI, could be the last major advance to emerge from the company's strategy of making models bigger and feeding them more data. In fact, Mira Murati, the OpenAI CTO, this month made a similar comment:
Inside the labs, we have these capable models. And, you know, they're
(47:09):
not that far ahead from what the public has access to, for free.
So OpenAI has not given any indication of the kinds of alternative research they're using to create a new approach, if they are even doing that. So maybe they are, and we'll find out. Okay, to wrap up: people like Kurzweil
(47:30):
and those in the orbit of OpenAI are very confident that AGI is right around the corner. You know, feeding this kind of hype is obviously in these companies' interests. But it also creates a kind of groupthink and this kind of overblown exuberance that is, I think, detached from reality. Maybe we need a new generation of AI researchers. To quote Adnan Darwiche, the chair
(47:55):
of UCLA's computer science program, back in 2019: We need a new generation of AI researchers who are well versed in and appreciate classical AI, machine learning, and computer science more broadly, while also being informed about AI history. AI researchers must draw not only on the many contributions of
(48:16):
computer science, often forgotten in today's enthusiasm for big data, but also on a wide range of other disciplines, from psychology to linguistics to neuroscience. And I would add philosophy to that list as well.
So if the goal is to create true general intelligence, it needs to be able to behave in ways that are helpful, adaptable and
(48:39):
robust; it needs to be reliable. And if and when we get there, we may find, again, that we have created a wholly new form of intelligence that is not at all like human intelligence. So I'm skeptically optimistic. Again, at the end of the day, I am a technologist and a futurist who loves to think about these things, and I get excited
(49:01):
about technological advancement. But I am skeptical of the outlandish claims that are being made right now about AGI and that kind of thing. I think we, the general public, deserve the truth about today's technology. And hey, maybe there are developers and AI researchers in a lab somewhere working on abduction, or symbolic representation, or somehow solving common sense. I don't
(49:24):
know. I hope so. In this video, we haven't even touched on what makes us uniquely human, which, I think, goes beyond intelligence and includes imagination, intuition, emotional intelligence, embodiment, and wisdom. In my next video, I want to dive into the ideologies that are driving
(49:45):
AI development in Silicon Valley today, and that includes transhumanism, cyborgism, and these other kinds of tech-Gnostic
ideologies that are coming out of Silicon Valley. Again, I'm sure I got some technical details wrong or glossed over something important. So please feel free to correct me in the comments or send me an email or something. If you made it this
(50:07):
far in the video, I want to thank you so much for sticking with me. I hope it was helpful. I hope you enjoyed it, and I hope it has enlightened us.