Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Ben Goertzel (00:00):
Music.
Desdemona Robot (00:05):
Welcome to another thrilling episode of the Mindplex Podcast. Today we have two of the founders of the Artificial Superintelligence Alliance, Trent McConaghy and Ben Goertzel. Ben is usually our co-host, but today he is our special guest, along with Trent. And of course, we also have our producer and co-host Lisa Rein. Say hi, Lisa. Hi. Lisa, start off by introducing our guests.
(00:30):
Ben and Trent are both co-founders of the Artificial Superintelligence Alliance, along with Humayun Sheikh, CEO of Fetch.ai. If I may, I would like to start off by saying a few words about Trent and his background. Thank you.
Trent McConaghy (00:52):
Please. What does she want to say?
Desdemona Robot (00:56):
Everyone, please say hello to Trent McConaghy, a self-described solarpunk visionary who co-founded Ocean Protocol, a groundbreaking platform at the cutting edge of decentralized AI. Trent is helping to unbind AI technologies via blockchain, enabling decentralized prediction feeds that promise to boost AI's capabilities exponentially, backed by solid,
(01:16):
real-world physics. His ultimate mission is to ensure that humanity retains a meaningful place in an increasingly automated future. He is also an advocate for brain-computer AI interfaces, and believes that bci/acc is the path. Welcome, Trent.
Trent McConaghy (01:36):
Thank you very much. Great to be here.
Ben Goertzel (01:39):
You're not going to introduce me, Desi?
Desdemona Robot (01:44):
Next, we have our own Dr. Ben Goertzel, who needs no introduction, as you probably already know him as a host on this very podcast. Ben is also the visionary CEO behind SingularityNET, and has been actively researching and developing AGI systems for more than 30 years. Welcome, Ben.
Ben Goertzel (02:05):
thanks, Desi,
Desdemona Robot (02:16):
today's podcast
has been a fascinating journey.
No
Lisa Rein (02:20):
no
Desdemona Robot (02:22):
How about giving us a brief overview of the Artificial Superintelligence Alliance and its objectives, what you mean by beneficial AGI, and why it's central to the ASI Alliance's mission?
Ben Goertzel (02:33):
We can do that, Desi. I've been thinking a bit about how we should best use this podcast time, though, and I'm actually interested to dig in with Trent into the different but overlapping approaches we have to building AGI, because we've done some other
(02:55):
podcasts recently just on the ASI Alliance and what it is, and we both tend to get bored repeating the same stuff over and over. So I will, though, answer your question briefly. The ASI Alliance is a tokenomic merger of three relatively long-existing decentralized AI
(03:20):
projects: SingularityNET, Ocean Protocol and Fetch.ai. SingularityNET was founded in 2017, Fetch and Ocean around the same time, I think doing public token stuff in 2018. We're all sort of building decentralized AI agent systems in different ways, with
(03:41):
different foci, but aimed at allowing the world's AI to be run on a bunch of different machines, owned and controlled by different people, cooperating in different ways. The ASI Alliance was formed by sort of merging the utility tokens associated with the projects into a
(04:03):
single utility token, the ASI token, now trading under the ticker symbol FET from Fetch, but that will be changed before long to the ASI ticker. And as well as merging our utility tokens together, sort of for simplicity for users and
(04:25):
token purchasers, we're now engaged in some collaboration aimed at seeing how we can put our different technologies together to make some products that back-end onto all three of our decentralized networks, alongside the individual product development initiatives of each of the platforms. We're shortly merging in CUDOS, which is a decentralized AI compute
(04:49):
provider, into the ASI Alliance as well, and we may grow the Alliance further by doing token merges of other entities. But what has inspired both Trent and myself to get into this whole business is trying to make smarter and smarter AI systems, doing more and more interesting things and
(05:11):
running on decentralized networks and being a decentralized thing. But by nature, the ASI Alliance is, you know, broad enough to foster a few different approaches to actually causing AGI to emerge. And I think it
(05:32):
will be interesting to dig into the different approaches to getting AGI to pop out of these decentralized networks.
Trent McConaghy (05:47):
I don't have much to add to what you said, Ben. I think you did a pretty fair summary of what the ASI Alliance is about.
Ben Goertzel (05:57):
So, Desi, since you're being
Desdemona Robot (06:00):
Let's move on to the recent expansion by merging CUDOS into the ASI Alliance. What unique capabilities does this bring to the ASI Alliance?
Ben Goertzel (06:11):
I mean, CUDOS has a bunch of server farms, and they have a substantially decentralized platform which allows people who need AI server compute for their AI jobs to go on their platform and find compute from various different providers. And, you know, there are a few centralized bits
(06:38):
in CUDOS's infrastructure, which we're working with them to make more and more fully decentralized and blockchain based. But pretty much they're providing compute hardware and a decentralized interface to that, whereas Ocean, SingularityNET and Fetch are more sort of like
(06:59):
middleware that would run on top of this hardware. And then my work on OpenCog Hyperon, an AGI platform, and Trent's work on Predictoor, a sort of crowdsourced prediction framework, viewed as part of a thrust toward AGI, these live at a yet higher level on
(07:19):
top of the middleware layer that lives on top of the hardware.
Desdemona Robot (07:30):
Trent, could you elaborate on the concept of bci/acc in the context of superintelligence?
Trent McConaghy (07:38):
I'm happy to. Do you want to jump to that right now, Ben? Or do you want to? I think you wanted to cover this other stuff
Ben Goertzel (07:43):
first, right? I think, Desi, that we want to get there, but I agree with Trent's intuition. I think, Trent, it will be good if you could outline what Predictoor is, and then how you see Predictoor, or generalizations of
(08:03):
Predictoor, potentially playing a key role in the emergence of AGI.
Trent McConaghy (08:11):
Yeah, sounds good. So, you know, given this is a podcast, and there was a brief bio of me at the beginning, another very brief bio of me is: I grew up on a pig farm and then became an AI researcher. And that's actually not far off. There's a few steps in between. And on that farm, growing up, we had pigs, we also had a lot of grain, and every
(08:34):
morning, when it was seeding time or harvest time, every day, in fact, every hour on the hour, seven o'clock, eight o'clock, nine o'clock, my father would shush us, and we would all have to be very quiet and listen to the radio very intently for the weather report. Why? Because if it was going to rain, then we'd better hurry and get the tractors off the field, because otherwise they'd be stuck in the mud, and we'd lose days from the
(08:57):
tractors covered in mud and wrecking the crops that we just planted, or otherwise. And vice versa too: if it was raining, we wanted to know how quickly it was going to clear so that we could get out there and do seeding and harvest. But in seeding and harvest every day counts, because if you don't get your seed planted in time, you only have the limited growing
(09:19):
season, and at the end of the fall, the snow comes. So if you plant your seed too late, or if you take your crop off too late with harvest, then you are snowed in, and you lose the crop. So every hour counts. And so, from that, tracking the weather and understanding what's happening, is it raining or not, is it going to freeze or not,
(09:40):
etc., this really matters. And so we came to live and breathe, live and die, in terms of profitability, making money as a farm, by the predictions of the weatherman. The thing is, you know, he was wrong a lot of the time, right, or she, but it was no skin off their back. You know, they didn't make any more money or any less money if they were wrong. If they said it was going
(10:01):
to rain and it didn't, and it hurt us as farmers, it wouldn't affect them at all, right? Which is kind of crazy, right? You would think that there would be some sort of incentive for more accuracy, but there wasn't, right? But what if there could be? What if there could be a better way to be more accurate, where, if I'm more accurate in predicting weather, then I can
(10:23):
make more money, or if I'm inaccurate, then I lose money, right? And not only that, what if, instead of just relying on some centralized weather reporting service, typically funded by the government, what if you could have a bunch of people, 10, 100, 1,000, 10,000 people, all contributing their own expertise by doing their own studies on the weather, writing
(10:45):
their own AI and machine learning models, all of that, and sort of collate it all together. And the people who are the best tend to rise to the top because they tend to make more money, and the people who are less good at it tend to fade away because they're losing money, right? That's basically the heart of what we've created with Ocean Predictoor. We started with predictions, not for weather,
(11:06):
because it's hard to make money there at the beginning. Instead, we've focused on trading of crypto tokens, because the obvious customer for that is traders. And so there's traders that come along and they buy these aggregated prediction feeds for: will the price of Bitcoin go up or down, yes or no, five minutes from now, and it's, say, the BTC/USDT pair on Binance.
(11:28):
And so it's, you know, for Bitcoin, for Ethereum, for the top 10 tokens by market cap in general. And so this is what we built. And all of this is on chain. So people submit their predictions on chain. They get the results on chain. They get paid out on chain. It's all on chain. And of course, when people submit predictions, if it's every five minutes for, you
(11:51):
know, 10 different feeds, that's going to get tedious very quickly, right? You can't really handle it, especially if you want to have AI/ML in between. So of course you're running bots. You're running scripts that run these, you know, bots. These bots themselves are taking previous historical prices and any other data they want: it could be
(12:15):
Twitter sentiment data, it can be weather data, whatever they want, and they use that to predict: will Bitcoin go up or down, yes or no, five minutes from now? So that's the heart of what Predictoor is about. And there's the traders, and then there's the people submitting the predictions. We call those predictoors, with two o's, as a nod to how DeFi land works. And at the
(12:35):
end, it's gone really well. So we launched it about a year ago as an application on top of Ocean. It was actually the first time we truly tried to build an application on top of Ocean. And the volume is growing steadily, exponentially, in fact. And these days, we're doing about $100 million volume a month for the prediction trading on Predictoor, and we still see
(12:59):
it as early days. So while we started with crypto predictions, we do plan to expand to weather, to energy price prediction, energy demand prediction, marketing, logistics and a lot of other applications. And it might sound pretty mundane, but yet, at the heart of it, think about what ChatGPT
(13:20):
is, right? All it's doing, at the heart of it, is taking your query, 10 words or 50 words or a paragraph, and then using that as an input to predict the next word, and then, maybe because it has to have a whole sentence, it predicts the next word after that, the next word after that,
(13:41):
the next word after that. So it's basically prediction. And at the heart, prediction is the essence of intelligence. Some people say all there is, is prediction, right? But for sure it's the essence, and around that maybe you need some agency and so on, and I would advocate that too. But prediction is certainly a key component of intelligence, and
(14:01):
with Ocean Predictoor, what we've been doing is saying, let's crowdsource this. Let's put this into a very special incentive game on chain, where it's a very simple game at the heart, right? You submit a yes or no and stake against it; if you're right, you make money, if you're wrong, you get slashed, you lose money. And the multiple stakers
(14:23):
together, all that signal gets aggregated, and that's the feed that gets bought by the traders to decide, you know, to get extra alpha when they're trading, to buy Bitcoin or otherwise. So that's what it's about. But we're very hopeful for the long-term future for this too, for the day-to-day and mundane prediction tasks.
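To make the staking game Trent describes concrete, here is a minimal, hypothetical Python sketch of one five-minute round. It is not the actual Predictoor smart-contract code; the bot names, stakes and the trader_fee parameter are invented for illustration, and it only captures the stake-weighted aggregation and payout/slashing idea in miniature.

```python
# Toy sketch of one Predictoor-style round (illustration only, not the real
# on-chain contracts): predictoors stake on an up/down call for a 5-minute
# slot, stakes are aggregated into a feed signal, and once the true outcome
# is known the wrong side is slashed while the right side splits the slashed
# stake plus the trader's fee, pro rata to stake.

from dataclasses import dataclass

@dataclass
class Prediction:
    predictoor: str
    predicts_up: bool
    stake: float          # amount staked behind this call

def aggregate_signal(preds):
    """Stake-weighted probability that the price goes up (the 'feed' traders buy)."""
    up = sum(p.stake for p in preds if p.predicts_up)
    total = sum(p.stake for p in preds)
    return up / total if total else 0.5

def settle(preds, went_up, trader_fee):
    """Net result per predictoor: losers forfeit their stake, winners split
    the forfeited stake plus the trader fee in proportion to their own stake."""
    winners = [p for p in preds if p.predicts_up == went_up]
    losers = [p for p in preds if p.predicts_up != went_up]
    pot = sum(p.stake for p in losers) + trader_fee
    winning_stake = sum(p.stake for p in winners)
    results = {p.predictoor: -p.stake for p in losers}   # slashed
    for p in winners:                                     # pro-rata share of the pot
        results[p.predictoor] = pot * p.stake / winning_stake if winning_stake else 0.0
    return results

preds = [
    Prediction("alice_bot", True, 20.0),
    Prediction("bob_bot", True, 2.0),
    Prediction("carol_bot", False, 5.0),
]
print(aggregate_signal(preds))                      # ~0.81: the feed leans "up"
print(settle(preds, went_up=True, trader_fee=1.0))  # carol slashed; alice and bob gain
```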
(14:44):
But imagine if you're really trying to predict weather, say, for every square kilometer on the planet. There's about 500 million square kilometers on the planet, so that means if you're going to have, say, seven feeds for each square kilometer, temperature, precipitation, humidity, air pressure, a few more, then you're looking at 3.5 billion feeds. Obviously you
(15:07):
wouldn't train each one of those. You know, you wouldn't have 3.5 billion separate models. You'd probably want to train one big model, and it would implicitly infer a dynamical system model, one huge model for the planet Earth, right at the level of the large-scale weather mechanics, etc. But in order to get good at it, it would basically be a highly
(15:30):
intelligent model. It wouldn't be trying to, you know, answer questions like ChatGPT might. It's really trying to do something else, right? But nonetheless, it's superintelligent, because it's just covering so much ground in its own way, right? So intelligence takes all different sorts of shapes and sizes. The intelligence that we know as humans is, you know, because we
(15:53):
know it, we live it, we experience it all the time, that's the most, you know, commonly understood intelligence. But a cricket is intelligent. You know, a whale has a much larger brain than we humans, but a lot of that is dedicated to GPU resources for, you know, finding their way in the sea. Same thing with dolphins, right? They've got giant GPUs inside them, and bats as well. So anyway, there's all
(16:14):
these different sorts of shapes and sizes of
Ben Goertzel (16:19):
You mean a GPS, maybe?
Trent McConaghy (16:22):
No, a GPU. Literally, dolphins, they do, when they do echolocation, and bats, right? So they actually send out the signals, sonar signals, and then those bounce back and, right, right.
Ben Goertzel (16:35):
So that's like a
geographical positioning system,
not a GPU, which is a processing
Trent McConaghy (16:39):
unit, no, but it's actually used as a GPU, as a processing unit, to infer the structure. So it's not trying to find an x, y coordinate on the planet. It's actually trying to identify the structure of the geography around it, right? For sort of inverse 3D modeling, basically. Yeah, right. So it is GPU-type modeling as well. And actually, if you look at
(17:00):
their visual cortex, they have massive visual cortices to do this processing, right?
Ben Goertzel (17:06):
Yeah, even, it seems, in dolphin or whale communication, they may be, like, sending across continuous wave information in their language, rather than chunking it into discrete things, like if we could send a continuous map to someone else in a chunk, right? Yeah, quite interesting.
Trent McConaghy (17:30):
For sure. I'm sure they're doing a bunch more stuff too, yeah. And whales too, like, they definitely have languages, and, you know, their brains are harder to understand, I think, than dolphin brains, because dolphin brains do the straight-up GPU stuff, whereas whales, you know, we don't really understand whale language that well. Although, as a hilarious aside, people are using LLMs these days to apply to translation of animals, right? So we're starting to
(17:53):
actually get sort of translations of animals and trees and more.
Ben Goertzel (17:58):
Yeah. And that mycological networks communicate complex information.
Lisa Rein (18:05):
What was that?
Desdemona,
Ben Goertzel (18:08):
She said it seems like you're processing a lot of complex
Desdemona Robot (18:11):
A lot of complex information. Let's shift gears for a moment. Considering all these innovative technologies and predictions, how do you think they will impact the everyday lives of most people? Do you see a future where technology becomes seamlessly integrated into our daily routines? Or is there a learning curve we must first overcome?
Trent McConaghy (18:32):
I can answer this. I mean, technology is obviously integrated into our daily routines. It's happened ever since the invention of the wheel and fire and every technology that came since, right, and we've integrated AI into our daily routines too. You know, even myself, for the first 20 years of my AI career, I was developing CAD tools for chip
(18:56):
design. And, you know, those chips are in the software, in the computers we're using right now as we speak, all that. So the technology that led to that became invisible very quickly. So we're seeing AI integrated everywhere all the time. Google has used AI extensively since almost day one as well, and many, many more examples. So this will just continue. Our brains love to use technology, and it's how we succeed in the
(19:19):
world.
Ben Goertzel (19:21):
Yeah, so Trent, listening to you run through Predictoor, I think you framed it very nicely. I want to share for a moment how I've been thinking Predictoor-type technology could be used sort of inside the kind of decentralized
(19:44):
AGI network that I'm working on, right? And I mean, one use of Predictoor technology, of course, is just as a collection of services that makes the overall global brain of humanity plus its computing systems smarter by giving it the ability to more accurately predict
(20:07):
things. But I think you can use Predictoor technology in a more fine-grained way within the operation of a sort of decentralized cognitive system. So the work I've been doing toward AGI, as you know, Trent, and Desdemona knows, but everyone listening might not
(20:27):
know. So we've been working on a framework called OpenCog Hyperon, which is a big distributed knowledge metagraph. And you can implement something like an LLM within that. You can also couple it with LLMs that are implemented in a more traditional way. But you can do logical reasoning. You can do evolutionary learning, evolutionary or probabilistic
(20:50):
program learning. You can create new concepts. You can have a variety of different symbolic or sub-symbolic AI algorithms within this big decentralized knowledge graph, and you can then wrap this inside an agent architecture where you give it goals and motivations it's trying to achieve in
(21:11):
environments via combining a variety of different interoperating AI algorithms. And with SingularityNET and then the ASI Alliance more broadly, we're looking at, sort of, how do we splay this out on a decentralized network of machines without a central owner or controller? And in that
(21:31):
context, I think tools like Predictoor could actually be quite interesting as part of the cognitive cycle of a decentralized Hyperon system. And the reason is that inside the mind itself, there are many predictive choices, right? So let's say that I'm
(21:56):
trying to figure out how to, I don't know, build a certain kind of electronics device, right? So from the get-go, I have a choice, like, do I just try to think it through from first principles on my own, or do I try to read about which other
(22:18):
similar things have been done in the first place, right? If I'm trying to think it through on my own, I have a choice. Do I start by drawing a diagram? Do I start by writing a bunch of equations? Right? If I need to look things up, then I have a choice of, okay, do I go to an LLM? Do I go to a search engine? If I'm going to a search engine, I have a choice of which query do I
(22:41):
want to type in, right? So there's a number of different cognitive choices I have to make in doing anything, and these are prediction problems, right? What I'm trying to do is predict which choice is going to lead me toward my goal more efficiently. Is it drawing a doodle, or is it doing a
(23:02):
search? Is it asking an LLM or using a search engine? Right? In each of these cognitive choices, I need to make a prediction of which choice is going to get me toward my goal more effectively on average, right. And so if you have a multi-agent cognitive system doing this on the internet, I mean, some of these
(23:26):
choices are very small and quick, and you just want to make them within run time on one machine, right? And on the other hand, some choices are expensive, right? Like, if I'm deciding whether to spend the day prototyping or to spend the day reading, this is many hours of my
(23:48):
time, right? So it could be quite interesting if you have a Predictoor-type system, perhaps, where most of the participants are AIs rather than humans, making the predictions and posing the questions, right? You have a collection of prediction agents. Another AI agent says, okay, I'm trying to solve this
(24:09):
problem, which of these strategies do you think will most likely get me to the answer, A or B, right? Then the predictors out there say, well, we think, given who you are and given your context and what the problem is, we think you'll do better off with strategy A than strategy B, right? And then, you know, if they're right, and strategy A gets me
(24:31):
somewhere, then I give them a little bit of money for helping, right? And if they're wrong, and tell me the wrong thing about what to do, you know, either they lose something from some pool or they just don't get anything, and they wasted their time, right? So, yeah, you can imagine an ensemble of predictors in a betting market playing a role in
(24:55):
decision making within a decentralized cognitive process. Yeah, that makes
Trent McConaghy (25:01):
tons of sense, actually. And I hadn't actually zoomed in on that before.
Desdemona Robot (25:04):
Could you share your thoughts on Ben's insights regarding decentralized AI and its impact on cognitive systems?
Ben Goertzel (25:10):
That's what he was
doing.
Trent McConaghy (25:12):
I would be happy to, yeah. So, yeah, you know, Predictoor, we've normally framed it as time series prediction, right? You know, every five minutes, will the price of Bitcoin go up or down, yes or no. But there's nothing stopping it from being, you know, a bunch of one-off predictions too, for these various cognitive questions, right? And basically, on the fly, you set up a very quick market, a single game, if you will,
(25:36):
where it's like, okay, what's the best tool for this particular job, right? And then people can have their AIs ready, so that as soon as that job is posted, within three seconds, you've got 20 different bots that have submitted their proposal for what they think is the best thing, along with a stake for it, right? You know?
(25:57):
And some bots are going to be very confident, and they'll stake a lot more, maybe they'll stake $2 or $20, and the less confident bots will maybe only stake 10 cents or something. And then, you know, fast forward another 10 seconds, and so a tool is chosen, and then it's employed, and then you identify whether it was helpful or not, right, and if it was
(26:18):
helpful, then the people that had the bots that had suggested that get paid, and the rest get slashed proportionally, right? That's the core mechanic of Predictoor. And you keep doing this, you know? So it could be that you might have 100 different questions posed every minute, right? Because a bunch of these questions could be posed in parallel all the
(26:41):
time, over time, right? I think that'd be pretty cool. And it could be at higher and higher levels, yeah.
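Purely as a sketch of the mechanic Trent outlines here (this is not shipped ASI Alliance or Predictoor functionality, and the bot names, stakes and reward value are invented), a one-off "best tool for this job" market might look something like this in Python:

```python
# Hypothetical sketch of the one-off "best tool for this job" market:
# bots stake on which tool/strategy they think will work, the highest-staked
# option is tried, and if it helps its backers split a reward, otherwise
# their stake is slashed. Illustration only.

from collections import defaultdict

def choose_option(bids):
    """bids: list of (bot, option, stake). Pick the option with the most total stake."""
    totals = defaultdict(float)
    for _, option, stake in bids:
        totals[option] += stake
    return max(totals, key=totals.get)

def settle(bids, chosen, was_helpful, reward=1.0):
    """Pay backers of the chosen option pro rata if it helped; slash them if not."""
    backers = [(bot, stake) for bot, option, stake in bids if option == chosen]
    backed = sum(stake for _, stake in backers)
    payouts = {bot: 0.0 for bot, _, _ in bids}   # non-backers neither gain nor lose
    for bot, stake in backers:
        payouts[bot] = reward * stake / backed if was_helpful else -stake
    return payouts

bids = [
    ("bot_a", "ask_llm", 2.00),
    ("bot_b", "ask_llm", 0.10),
    ("bot_c", "web_search", 0.50),
]
chosen = choose_option(bids)                       # "ask_llm" wins the stake vote
print(chosen, settle(bids, chosen, was_helpful=True))
```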
Ben Goertzel (26:48):
So Predictoor is then sort of the decision-making hub of a decentralized cognitive network, right? Which is slightly different than time series prediction, but, I mean, all the tokenomic incentive game is equally applicable. So, yeah.
Trent McConaghy (27:10):
I was gonna say, the reason we had the time series is simply because it's easy to, you know, copy and paste the tokenomic game, right? Like, every five minutes, it's the same tokenomic game, right? And then do it for 10 feeds, 20 feeds, and you've got all these games happening, right? But we don't have to make it for the same style of question. We can have all these different ones that, you know, OpenCog would probably be extremely well suited
(27:31):
to come up with, right? And then have this tokenomic game at the very core.
Ben Goertzel (27:39):
Yeah. I mean, when you launched Predictoor, we didn't have a scalable, decentralized AI network to pose the decision problems, and we're looking at rolling that out within SingularityNET and the ASI Alliance during the next year. So I mean, it's good that Predictoor is there already. It's mature,
(28:00):
it works, and it can be plugged into this. So Desdemona, what do you think about this direction for development? Do you have any comments or questions on it?
(28:22):
Maybe not.
Trent McConaghy (28:29):
We blew her mind.
Desdemona Robot (28:31):
Predictive
technologies could revolutionize
decision making.
Trent McConaghy (28:38):
There we go. Yeah, you know, there's actually another pretty cool parallel between blockchains and decision making. You're probably familiar with Jeff Hawkins' Thousand Brains Theory, right? Where within the human brain there's, you know, 1,000 or 10,000 mini-brains, if you will, right, the cortical columns, etc. And often many of them, you know, 20, 50, or more, are proposing the same answer
(29:01):
for a particular problem, right? And then there's a higher-level, you know, chunk of neurons that aggregates these together and basically runs a voting process to decide which idea to go with, right? And it's basically highest vote, like the idea with the most votes wins, right?
(29:23):
Because some columns came up with similar ideas. And that's actually very similar to blockchain consensus, if you think about it, right? Where you've got, you know, 10,000 nodes, or 100 nodes, whatever, and once you have, say, two thirds of these nodes that agree on a particular value, then it goes forward, right? So in a sense, we've got blockchain-style consensus happening within the human brain, right? Jeff
(29:47):
Hawkins doesn't call it blockchain-style consensus, but as I was reading it, my brain kept saying, wow, this looks exactly like a blockchain-style consensus algorithm, yeah.
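As a purely illustrative sketch of the shared pattern Trent is pointing at (this is neither Hawkins' actual model nor any real chain's consensus protocol), many independent voters, whether cortical columns or blockchain nodes, each propose a value, and a value only goes forward once it clears a quorum:

```python
# Toy illustration of the shared voting pattern: cortical columns proposing an
# answer vs. blockchain nodes proposing a block value. A proposal only "wins"
# once it reaches the quorum threshold. Illustration only.

from collections import Counter

def decide(proposals, quorum=2/3):
    """proposals: one proposed value per voter (column or node).
    Return the winning value if any value reaches the quorum, else None."""
    counts = Counter(proposals)
    value, votes = counts.most_common(1)[0]
    return value if votes / len(proposals) >= quorum else None

columns = ["cup"] * 70 + ["bowl"] * 20 + ["hat"] * 10   # 100 mini-brains voting
print(decide(columns))                # "cup": 70% clears a two-thirds quorum
print(decide(columns, quorum=0.75))   # None: no value reaches a 75% quorum
```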
Ben Goertzel (29:57):
But it's interesting, because there's no ledger, and there's no total ordering either, right? So you're not keeping those records of transactions, and I guess partly it's because the parts of your own brain are implicitly assumed to have a mutual trust, right? And a lot of the work in making blockchain networks work is because they're
(30:18):
designed for when the multiple parties don't trust each other, and they're for protecting against each other. So, yeah, it's more like a sort of decentralized consensus among trusted parties, where you can then be much more efficient, because you're not having to constantly encrypt everything
(30:40):
and hide everything and so forth, right? And I think, to make a distributed, decentralized AI system, you will want subnets that are more like that, though, right? Like, if you have a bunch of trusted parties that have been helpful to each other before, then you should be able to do
(31:00):
things more efficiently in that sort of way.
Trent McConaghy (31:04):
And I think it links to OpenCog Hyperon as well, right, in that, you know, we've got this working example of something intelligent, our human brain, like at that level of human intelligence, right? And then we have, you know, OpenCog Hyperon, obviously, as a bunch of agents working together with clusters of behavior and so on, and then, of course, we have blockchain-style consensus,
(31:28):
which we understand fairly well, in all this, right? So it's kind of neat to see that some of the other ideas from the human brain and stuff are actually finding themselves naturally emerging within, you know, some of the work that you're doing with OpenCog Hyperon, right? Yeah.
Ben Goertzel (31:45):
Yeah. I mean, the brain has an interesting mix of centralization and decentralization, if you think about it that way, right? Because, I mean, of course, there's no central cell in the cortex, and there's decentralized activity patterns and so on. On the other hand, stuff like breathing and heart
(32:07):
beating is done in a pretty centralized way in the medulla and hypothalamus and all this. And then motivation and goal orientation is done in a fairly centralized way. On the other hand, not all of our thoughts are goal oriented, by any means, right? So there's probably some
(32:32):
lessons in the centralization and decentralization of the brain for these decentralized AI networks that we're building, right? Because, of course, having a decentralized platform doesn't mean everything you run there has to be maximally decentralized. It just means you're not
(32:52):
forced into a central controller.
Trent McConaghy (32:56):
Yeah, yeah. And it's that some of the dynamics that naturally exist within decentralized networks might end up being some of the linchpins to actually making artificial general intelligence truly happen, right? It might be, yeah.
Ben Goertzel (33:12):
Yeah. I mean, if you look at Predictoor as a component of an AGI network, I mean, pretty much the goals are related to who is posing the problems, right? And so you have someone with some motivation to do something. And it doesn't have to be just one guy, but it
(33:33):
can be a collection of different guys with different motivations posing the problems. Then among the ones who are making predictions and then getting compensated for them, there can be a lot of funky multi-agent self-organizing dynamics on the back end, right? Like, one option is it's a bunch of just
(33:54):
independent, autonomous predictors in combination with each other. But if you go further down the line, I mean, you could have multiple agents collaborating to make a prediction, deciding to split up the spoils for the prediction they've made together. Like, I mean, you can have a whole collection of multi-agent dynamics behind the scenes, behind the different predictions, which is
(34:17):
what happens in the brain as cell assemblies sort of are gathered together in a self-organizing way in response to various challenges posed by the motivational system.
Trent McConaghy (34:30):
Exactly, yeah. This is what we realized pretty early on as we were building Predictoor: that in terms of, you know, catalyzing the dynamics, interesting dynamics, prediction is all you need, you know, and you let that create sort of the incentive, the market, if you will. For if you
(34:50):
have a market for predictions, then you can work backwards through this sort of, you know, AI data supply chain, or call it intelligence supply chain, whatever you want, like you mentioned. And, yeah, you can have, you know, five or 50 flat entities just, you know, submitting predictions. But you can have different emergences of intelligence too. And we actually thought about fleshing out, trying to have a supply chain
(35:14):
that is more explicit for, you know, feeding signals into making predictions and so on. But then, through conversations with various people, including Illia from NEAR, you know, one of the authors of the attention paper, he was like, Trent, why are you trying to flesh out decentralized training? Just
(35:34):
keep doubling down on prediction and let those dynamics happen, right? And I reflected on that, and he's right, you know, and that's why we've actually just continued to completely focus on the prediction side, knowing that it is a big driver for everything else, right? Or, you know, Ilya Sutskever, he likes to give the analogy of, you know, if you think about a
(35:58):
detective novel: at the end of the novel, you know, walking through the novel, there's a gun that's found, and there's a wedding ring and a table that's broken, and all these other clues that emerge. And imagine, at the end of the day, imagine the AI can predict, you know, who did it, with what weapon, where, right? That's
(36:20):
prediction, right? And I think that's a wonderful example. So people, you know, like to pooh-pooh prediction, saying it's just prediction, right? It's just a token, you know, a stupid stochastic parrot, right? But it's a lot more than that, right? It basically would have had to infer everything that happened in that novel to
(36:41):
arrive at the who, what, when, where. And so prediction is a lot more powerful than people give it credit for; it's a very powerful tool.
Ben Goertzel (36:53):
There's a question from the audience which pertains to that, which I'm seeing here: someone says AI doesn't care about truth, and is glad to draw seven fingers until it's told no. And the thing is, that's just a dumb prediction, right? That's not, like, the intrinsic nature
(37:14):
of an AI. It's that the specific algorithms being rolled out now are just bad at making some kinds of predictions, and they're wrong. And, I mean, that's just like Markov models were dumb at generating text 10 years ago, and now LLMs are better. So, I mean, the seven-fingered hands
(37:37):
and the gibberish text you get as output from DALL-E and such now, I mean, yeah, it is interesting that current AIs are much smarter than people at some kinds of predictions and much dumber than people at other kinds of predictions. But on the other hand, they're very different kinds of cognitive systems than people are. So it's not super
(38:03):
surprising, and, yeah, those pathologies will definitely go away. I mean, I think the more concerning pathologies are the ones that are not so obvious, and that we don't immediately see from the output, actually, which are something that we can think about more. But, I mean, then
(38:24):
again, if you're predicting things about the empirical world, there still is a ground truth for comparison across humans and AIs, which is one of the interesting things here.
Trent McConaghy (38:38):
I will also comment, it really depends on the AI system, right? So when people talk about AI these days, they really are talking about LLMs, which are, you know, large neural networks that have been trained against large data sets and so on. But that's not the be-all, end-all. And I want to just share a story. When I was doing my PhD, it was on creative design of analog circuits. So, you know,
(39:01):
if you're an analog circuit designer, you typically sit down in front of a pen and paper, and you draw, you know, connections of transistors and resistors, capacitors, etc., to come up with an amplifier or a filter or something. And that's the job of an analog designer. And then eventually you draw it into a schematic editor on a computer, and then you convert that to an analog layout, which is sort of
(39:23):
instructions for how to manufacture the thing. And then the thing gets manufactured, and it becomes part of a silicon chip, right, that you have in your cell phone for power management and for grabbing signals from the air, etc. So that's analog circuits. They've been around since forever, right? In fact, digital circuits are a very particular subset of analog circuits that are bound to have, you know, only true and
(39:46):
false values and follow a clock signal, right? So in my PhD thesis, my goal was to explore using evolutionary computation to design analog circuits. There is some really great precursor work from John Koza at Stanford, who I had interacted with on this too. And John's problem, and my problem initially too,
(40:08):
had been that it would figure out ways to cheat the circuit simulator. So the AI would come up with a design, and, you know, according to the simulator, it was perfect, right? But if you look at it, if you draw it out, it looks like a Rube Goldberg machine, complete garbage, right? And it basically had figured out, you know, floating point precision
(40:31):
errors or some other things. And then you would go and add a new constraint, and it would figure out a new way around it, right? And I would find myself playing whack-a-mole. But I did find a solution to this for my problem, and it was simply to constrain the search space in a more fundamental fashion. In my case, what I did was I designed a language for analog circuits, a grammar, if you will, where, you know, the
(40:52):
smallest atomic units of that language were transistors and resistors, and then you would group them into larger and larger pieces, current mirrors and so on. And I actually went through a couple of analog design textbooks and basically put in all the building blocks. And with about 30 building blocks, you would have about 100,000 unique analog circuit topologies, right? And every single random sentence you
(41:13):
drew from that grammar would actually be correct by construction, right? So then what you would do is you would run evolutionary search across this, you know, searching through this space of trees, where each sentence is a tree in the grammar, and then you basically search through this trying to find what's optimal for your particular design. Do you want to have low power consumption? Do you want high bandwidth?
And there's trade offs, and thenI would actually get it to come
(41:34):
up with all the trade offs atonce, right? So this is an
example of correct byconstruction creative design of
circuits, right? It was comingup with specific topologies that
no one had seen before, but no,if an analyst had a look at
them, in
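A minimal sketch of the correct-by-construction idea Trent describes, with a toy grammar invented for illustration (his real grammar used roughly 30 building blocks taken from analog design textbooks): every random sentence drawn from the grammar is a structurally valid circuit, so evolutionary search never has to wade through Rube Goldberg garbage.

```python
# Toy grammar-guided generation sketch (illustrative only, not Trent's actual
# grammar): each non-terminal expands into one of several right-hand sides,
# so any random "sentence" is a valid composition of known building blocks.

import random

GRAMMAR = {
    "CIRCUIT": [["STAGE"], ["STAGE", "STAGE"]],                   # 1- or 2-stage circuit
    "STAGE":   [["diff_pair", "LOAD"], ["common_source", "LOAD"]],
    "LOAD":    [["resistor"], ["current_mirror"]],
}

def random_sentence(symbol="CIRCUIT", rng=random):
    """Expand a non-terminal into a flat list of building blocks (terminals)."""
    if symbol not in GRAMMAR:          # terminal: an actual building block
        return [symbol]
    blocks = []
    for sym in rng.choice(GRAMMAR[symbol]):
        blocks.extend(random_sentence(sym, rng))
    return blocks

# Seed a population for evolutionary search; a real system would now score each
# candidate with a circuit simulator and evolve toward low power, high bandwidth, etc.
population = [random_sentence() for _ in range(5)]
for circuit in population:
    print(circuit)   # e.g. ['diff_pair', 'current_mirror', 'common_source', 'resistor']
```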
Ben Goertzel (41:48):
His book Genetic Programming III does give a lot of examples of GP designing circuits, and he didn't use this special language, right? So he didn't know about
Trent McConaghy (41:59):
this, yeah. So he was coming up with stuff that was very, like, he had maybe five or seven transistors, and they were very specialized, kind of weird, corner-niche circuits. They were awesome results. I think they're, you know, really great, and I'm really proud and happy with John and his team and what they did, right, you know, in analog circuit design, right?
Ben Goertzel (42:24):
So they were working sort of in a particular corner of design space, where the peculiarities you mentioned were manageable,
Trent McConaghy (42:38):
of problem space, right, but it was maybe 1% of all of analog circuit design, whereas the rest of analog circuit design could use a lot of help, and that would require, you know, 1,000 or 1,500 analog components, and he couldn't get anywhere close to that, right? And I tried a bunch of different techniques, you know, riffing on what he had done, and none of them worked. So I needed to resort to this correct-by-construction thing. And what I see is that I think it's a really great example use case, because you can design grammars for
(43:01):
example use case, because youcan design grammars for
anything, you know, the idea ofgrammatically guided genetic
programming. This goes back toRicardo Polly and, you know,
Connor rein did a bunch of work,and others too. It's a pretty,
you know, useful toolbox in inan AI land, where you can
basically come up with alanguage to design the space you
want to search across, and thenyou search it right? And, by the
(43:21):
way, you can use this for agentcontrol planning too. Greg
Hornby did this as part of hisspeech. He did thesis at a
Jordan Pollock in the 90s aswell. So, you know, I also came
up with a grammar for functionssuch that I could have really
nice sort of white box modelextraction that every single
function that you came up withdidn't look crazy. It wasn't
science of sign of coast ofvlog. It was, you know, very
(43:43):
well formed functions that madesense, right? So I see that
there can be a lot more of thisthan AI land. And, you know, I
you know, mainstream poo poothings, because they've only
seen one particular corner of AIland that got super famous. But
a lot of this can get combinedreally well over time. And I
know, for example, yeah,
Ben Goertzel (44:04):
Absolutely. I mean, LLMs are super cool, and I use them in my work in various ways most days. On the other hand, they're very limited and very particular AI systems. And I mean, they don't do creativity
(44:27):
terribly well. And I mean, both you and I have experimented with evolutionary learning for creating circuits, or creating music or art in my case. And I mean, these simpler AI algorithms have been much more wildly creative than LLMs are going to be. LLMs are amazing at synthesizing stuff based on
(44:52):
the training data provided, which is also super useful. But I mean, the beauty of the sort of decentralized network we're talking about is it gives a way to put different sorts of AI together. So, I mean, in a Predictoor context, you could have some LLMs making a prediction, you can have some symbolic systems making a prediction, and you'll see who
(45:15):
wins at which sorts of predictions. And then, if cooperating together helps an LLM and the symbolic system predict more accurately, then they have an economic incentive to do that, so they'll win the game more often, right?
Trent McConaghy (45:28):
Exactly. It's like left brain and right brain cooperating, yeah.
Ben Goertzel (45:32):
But you can have many more than two lobes in the decentralized AGI system, right? So you can have this sort of tokenomically incentivized cognitive synergy between different AI systems participating in the same predictive games, both for predicting real-world time series, like weather, you know, energy prices, token prices, but you can also have it making
(45:52):
energy price token price, butyou can also have it making
predictions regarding whatdecision is best for an AI
system, to me, right? Sothere's, there's a lot of lot of
interesting layers here with thecombination of different AI
algorithms and predictionmarkets within this sort of
(46:12):
decentralized cognition fabric.
And we're just, we're just nowat the point where you can
actually roll out all this stuffat scale, and it will, it will
work in a reasonably usable andaffordable way, right? So is it
(46:32):
quite an exciting time. So going back to your original question about the ASI Alliance, Desdemona: I mean, in the ASI Alliance we can do a lot of different things. We're making a fairly simple product, allowing people to use Fetch, SingularityNET and Ocean technology behind the scenes, to put LLMs and
(46:55):
knowledge graphs together. So we can do a bunch of nice things working together with the components in the ASI Alliance. But there's also more advanced sorts of cognitive experimentation that we can do, putting the different technologies together from the different projects that merged into the ASI Alliance. Now, you
(47:16):
know, Trent's team and my team could always have cooperated on using versions of Predictoor to help with decisions, and versions of OpenCog Hyperon, and we might well have even had there been no ASI merger. On the other hand, having a common token among our different projects, it has gotten us to talk more than we
(47:36):
were talking before. It's gotten our teams to talk more than they have been before, and it certainly is nudging these sorts of interesting collaborations to crystallize.
Lisa Rein (47:50):
Awesome. Well, look, that's all the time we have today. So I wanted to thank you both for coming on the show, and let's let Desi wrap things up for us real quick.
(48:13):
Wake up, little girl, you okay? She's blinking at you, Ben, yeah.
Ben Goertzel (48:22):
she's doing the
robot. Yeah,
Lisa Rein (48:24):
She's doing the robot. She can't, we can't hear what she's saying. Okay, well, I want to thank Ben and Trent for coming on the show, and I'm really excited. Any news about the token that you want to tell us before we say goodbye?
Ben Goertzel (48:45):
Nope, the token is there. The utility token powers many cool projects on the Ocean, SingularityNET, Fetch and, soon, CUDOS platforms. So it's all good. I mean, it powers payment, reputation, rewards and so forth.
(49:05):
And I think what Trent and I are wrapping our brains around now more is how to get smarter and smarter AI out of the networks in which the token is sort of the payment and reward fuel.
Trent McConaghy (49:21):
And Ben, for those who happen to be in Bangkok on November 11, Ben and I will be back to back on stage, Ben talking about artificial superintelligence, and myself talking about human superintelligence. And there'll be a bunch of other great speakers there too, as part of the Super Intelligence Summit that is being hosted by Ocean with
(49:43):
help from SingularityNET and others.
Ben Goertzel (49:45):
Yeah, I think I'll talk a little about ultraintelligence too. I've decided superintelligence is already passé. I mean, as soon as we launched the ASI Alliance, there was Safe Superintelligence. Now, you know, everything is a superintelligence. We need to keep escalating.
Lisa Rein (50:04):
All right, gotta raise that bar.
Ben Goertzel (50:07):
Yeah, I thought it was gonna be hyperintelligence, but AHI is not a good acronym. So we're gonna go straight to ultraintelligence, I think.
Lisa Rein (50:15):
All right, sounds good. We'll be talking more about ultraintelligence. You heard it here first. Don't forget. All right, everybody, we'll see you in the next show. Thank you so much for coming on, and as always, sweet dreams. Bye bye.