
September 15, 2025 41 mins

Is AI going to go the same way as computing: from colossal LLMs owned by a few companies to billions of networked AI agents? How does that parallel one of the great underappreciated secrets of the human brain? Join us this week with guest MIT Media Lab professor (and AI-decentralizer) Ramesh Raskar.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
In the early days of computing, everyone thought about a big,
colossal mainframe computer that had all the data, and eventually
the model switched and what we ended up with is
billions of individual small computers, all networked and operating as
a collective. Might AI go the same way, where we

(00:27):
move from giant large language models owned by a few
companies to lots and lots of individual AI agents that
are decentralized and operating together? And what does this have
to do with solving diseases? And why, in the future,
will we have an army of agents solving problems for us
out there? And what to make of the fact that

(00:49):
we will have created a whole new species that runs
at a timescale a trillion times faster than we do.
Welcome to Inner Cosmos with me, David Eagleman.

Speaker 1 (01:03):
I'm a neuroscientist and author at Stanford and in these
episodes we sail deeply into our three-pound universe to
understand why our lives look the way they do. Today's

(01:30):
episode is about what happens when intelligence becomes decentralized. When
we talk about artificial intelligence, the conversation is typically about
one giant model, a mind blowingly massive data set and
a huge compute cluster, so that we get one singular
model that takes care of everything. But the first thing

(01:52):
to note is that's not how the brain works. As
I argued in my book Incognito, the brain is best
understood as a team of rivals. Inside your skull are
scores or hundreds of specialized modules. Some care about vision,
some about hearing and touch, some about walking, some about processing faces. Others

(02:14):
track moving objects in the world. Other systems are working
to predict what you should do next. But the key
is that these systems in the brain are not like
pieces of a very efficient machine where each part serves
its purpose. Instead, every second of your life, these networks
find themselves in conflict. They argue, they interrupt, they compete

(02:36):
for dominance. The feeling of having a unified self is
an illusion stitched together on the fly. So maybe the
future of AI isn't one giant LLM to rule them all,
but something more like what we are: a distributed system,
a team of rival agents, an argumentative network of local experts,

(03:01):
each with limited knowledge and a unique perspective, together making
something more powerful than any individual part. But how do
you build that? How do you preserve data privacy when
intelligence is spread across devices and across hospitals and across
country borders? How do you incentivize data sharing? How do

(03:22):
you verify who's a good actor?

Speaker 1 (03:25):
These are the kinds of questions that animate my friend and colleague, Ramesh Raskar.
He's a professor at the MIT Media Lab. He's an inventor,
He's a systems thinker and a pioneer in this new
world of decentralized AI. His work spans from privacy-preserving
technologies to public health tools, to new ways of imagining
what AI can be from a technical point of view,

(03:48):
but also ethically and socially. So today we're going to
talk about why the future of AI might lie in localized,
specialized systems that are trained on the fly, and how
new architectures might allow data to stay where it is
on your phone and your clinic and your city, while
still contributing to a global model. So this is a

(04:10):
conversation about architecture and ethics, competition and cooperation, and the
radically different future that becomes possible when we stop trying
to centralize everything and instead embrace the power of the network.
So let's take, as an example, healthcare systems. So

(04:31):
the idea is you've got all this data locked up
in electronic health records all over the place. None of
the hospital systems want to share that data because they
don't want other people looking at it. And it's valuable data.
And so the idea of decentralized AI, for example, could
be that an AI is looking at all the different

(04:52):
electronic health records in all the different hospital systems, but
no human is looking at it. With millions of records,
it puts together a big picture and discovers things.
Is that right?

Speaker 2 (05:02):
Right?

Speaker 3 (05:03):
I mean, health is a great example, because I would
argue that if somehow I had, for every health condition,
all the data, and I could bring it in one place
and train models, I could solve it. The problem is,
you know, the capitalistic system doesn't allow us to do
it, because of privacy, because of trade secrets. And as
I said, people hold on to the data thinking it's
worth something, but they don't get paid either. So if

(05:24):
there was a marketplace for data, so that they don't have
to share the data but they can share the insights
from it and get paid in return, then everybody would
be incentivized to contribute these insights without contributing raw data.

Speaker 1 (05:39):
So the idea is, let's say I'm a hospital system.
I'm not giving away the electronic health records. Instead, I'm
letting an AI poke its nose into these things, figure
out insights from the data, and leave. And that way,
I haven't given up anything in particular, and I get
paid for that, by the way.

Speaker 3 (05:58):
That's right. I mean, another way to think about this is, again,
you want to share intelligence without sharing raw data, and
that's how the human society works.

Speaker 3 (06:06):
The analogy would be: if I had enough money, you know,
I could hire, you know, a thousand medical interns. If I
wanted to solve a particular health condition, I would send them
to every hospital, ask them to do an internship for a year,
learn about everything that particular hospital does, about how they treat
a particular condition and how the etiology and pathology works, and
then bring all thousand of them into one office and extract their

(06:32):
intelligence and then solve any condition that's out there.

Speaker 3 (06:36):
And that would be a way to bring intelligence from all
these thousand hospitals without bringing in the raw data.
So how do you now make that happen,
not in the physical world, but in the digital world.
By effectively sending a micro-AI, like an agent that behaves
like an intern, that learns everything and brings back only
the intelligence, and in return pays for the stay at
the hospital, and then, you know, creates a global algorithm

(06:59):
for that.

Speaker 1 (07:00):
So let's take one second to distinguish decentralized AI from
agentic AI. They're related, but tell us the difference.

Speaker 3 (07:07):
Yeah, I mean, agentic AI is a great way
to think about it, because it makes it very simple to understand.
So the reason to use the agent as an analogy, as
an anchor, for decentralized AI is because you imagine
an agent is like a medical intern, and the agent
can go to the hospital, you know, stay

(07:28):
there for some time, learn everything about the data, pay
for everything they have learned, and then come back.
And I can take these thousand agents and now I
can ask them any new question about that health condition.
And the beauty here is that I know that the
agent did not steal any data, because I can send
the agent as a model. And a model, as you know,

(07:48):
is, you know, basically a big file, let's say one
hundred megabytes. You send a model, it goes in, it looks at
gigabytes of data in the hospital, but comes back again
only as a hundred-megabyte model.

Speaker 1 (08:00):
So you tweaked the parameters in the model.

Speaker 3:
Exactly. We're not bringing back the raw data, which is
what happens in the human brain. So if you think
about centralized AI versus decentralized AI: in centralized, you have
this one massive model that's doing everything. Decentralized AI matches
very well with the concept of agentic AI, because the notion is that each of the agents

(08:21):
is not one of these massive six-hundred-billion-parameter models that
you're hearing about from OpenAI and Meta and so on.
These agents are much smaller, and they're very specific
and they do specific tasks.

Speaker 3 (08:34):
So a legal agent may not know anything about the health agent or financial
agents and so on. They're doing very specific things, and
agents and so on. They're doing very specific things, and
then they can get better over time, and they get
better in two ways. They get better because they're looking
at new data, but more importantly, they get better because
they're interacting with other agents. And that's where decentralization comes
in: it kind of mimics human society, because even

(08:56):
for us, you know, growing up, we learn from some
data and some experiences, but most of the things we
learn are actually through interactions.

Speaker 1 (09:05):
So are you saying decentralized AI and agentic AI
are the same thing, or that agentic AI is a
good example of decentralized AI?

Speaker 3 (09:12):
Agentic AI is kind of the brain that you need,
but you need the rest of the ecosystem to emerge for decentralization. So,
for example, what does it mean for agents to have
commerce? How do agents pay each other?
You know, how do I know that the data that
I used from a hospital, is it worth one
hundred dollars, a thousand dollars, or a

(09:33):
million dollars? You also have to figure out, you know,
how do you make sure that the agents are not misbehaving?
So beyond the intelligence of a given agent, which is
classic agentic AI, we don't think about the rest of
the decentralization aspect of it. It's like saying, you know,
I might have the ability to do transactions, but that's very
different from building a stock market.

Speaker 1 (09:54):
And so give us an example of what the world
will look like, let's say, ten years from now,
when we've got a lot of agents and we have this
whole infrastructure for agents to communicate and pay each other
and so on. What does that mean for all of us?

Speaker 3 (10:06):
It's difficult to predict. But I would say, in the short run, every one
of us will have our own agent. You know, every
organization will have its own agent. And these agents will
work on our behalf. So they're anchored to our identity, or the
identity of the organization, and they're working on our behalf,
and, you know, they're doing commerce on our behalf.
So if you think about kind of the roadmap of

(10:29):
how this agentic world will evolve: in the beginning,
it's just about foundations, like, do these agents have identity?
Do they have authentication? Can they really represent us? You know,
what's the reputation when my agent is using another agent?
You know, can it trust it? You know, like
KYC is "know your customer," there will be "know your agent." All

(10:50):
the foundational stuff. That's what's happening now in the industry,
and also our work at MIT. The next phase is
agentic commerce. Just as, if you move to a new city,
you know, you figure out, you know, what job you're
going to get. You know, are you going to get
paid fifteen dollars an hour or a thousand dollars an hour?

Speaker 3 (11:06):
And so on.

Speaker 3 (11:07):
You know, you start learning, you start working. Over time,
your skills are not that great, so you go to,
you know, a school. So there will be agent schools;
agents will go to agent schools.

Speaker 2 (11:17):
Sometimes they need to be repaired; there will be agent repairs.

Speaker 3 (11:20):
Sometimes there will be a risk of using an agent,
so you'll have agent insurances, you know, and so on.

Speaker 3 (11:25):
So you have, like, a whole commerce and economy around agents.
But then the third phase is very interesting, which is
that you have very emergent behaviors with agents.
Just as, once you move to the city, you don't
just work and, you know, think about your livelihood, but
you start creating social circles. You know, you have sports teams,
you know, you have universities, you have, you know, justice

(11:47):
and courts. So you get all this emergent behavior when
folks get together, just as when agents get together. So
you can see this going from foundations to agentic commerce
to this, you know, emergent behavior of agent societies. But
this assumes that the agents are anchored to human identity.
We can't really see beyond this event horizon of what
will happen when billions, maybe trillions,

(12:09):
of agents are navigating on their own on the internet,
you know, meeting other agents and learning and socializing and,
you know, transacting by themselves. So it's
kind of difficult to imagine what will happen beyond the
scenario where they're still representing, and anchored to, individual human beings.

Speaker 1 (12:29):
And even when they are really anchored on us, it's
still very difficult to see what's going to happen, because
they can run a trillion times faster than we can.

Speaker 3 (12:37):
And they can replicate, and they can be, you know,
in multiple places at the same time, and they can
migrate. You know, they can behave one way and then
behave another way in the next millisecond, and so on.

Speaker 1 (12:48):
So I have a question. A lot of people,
of course, are worried about AI alignment and what happens
if things go bad. And in this case of decentralization,
with all these agents running around working at speeds that
are much faster than we can comprehend, it does
seem like a bit of dangerous territory to get into. Now,

(13:09):
you might say that centralized AI also has its dangers,
and that what we need maybe are checks and balances
with decentralized AI. But what does that look like?
What would it mean to put in checks and balances?

Speaker 3 (13:19):
I'm glad you brought up alignment in centralized AI. The
problem is different when you think about decentralization and agentic
AI, because now we don't have to worry about, you know,
some one evil character taking over the world, but agents
on their own could do bad things. So as long
as they're anchored to a human identity, I think we should

(13:40):
be in reasonably good shape. It's almost like using
two-factor authentication, you know, like the agent cannot do
anything unless I tap something on my phone, just to use
today's analogy. So that should be under control.
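The two-factor analogy can be sketched in software as a human-in-the-loop approval gate. A minimal sketch, with hypothetical action names and thresholds; the `approve` callback stands in for the tap-on-your-phone confirmation:

```python
# Hypothetical human-in-the-loop gate: the agent acts freely on
# low-risk tasks, but anything risky waits for the owner's approval.

def run_agent_action(action, risk, approve):
    """Auto-execute low-risk actions; escalate the rest to a human."""
    if risk < 0.5:
        return "auto-executed: " + action
    if approve(action):
        return "human-approved: " + action
    return "blocked: " + action

# Simulated owner who confirms purchases but never fund transfers.
owner = lambda action: action.startswith("buy")

print(run_agent_action("check calendar", 0.1, owner))  # auto-executed
print(run_agent_action("buy shirt", 0.8, owner))       # human-approved
print(run_agent_action("transfer funds", 0.9, owner))  # blocked
```

The design choice is that autonomy is bounded by risk: the agent keeps its speed advantage on routine actions, while consequential ones run at human speed.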

Speaker 3 (13:51):
But once we go beyond that, and agents are sufficiently
autonomous that they can create their own societies, now the
challenge is, you know, how do you make sure they're
doing the right thing? So alignment becomes a very different
problem. It becomes an algorithmic orchestration problem: how do you
have algorithms that orchestrate, you know, the incentives, the carrots
and sticks, for the agents to behave? And that's actually a very

(14:14):
complex problem. That's an area of research at MIT
as well, because you could imagine that, you know, if
you try to use decentralization like they did
for DeFi, in the crypto and Web3 world,
it didn't go very well. There were examples like Luna and
Terra, compared to stablecoins today, and they had, you know,

(14:36):
an orchestration to figure out how do we maintain, you know,
their one-dollar anchor.

Speaker 1 (14:41):
So anyway, I think that ended up crashing.

Speaker 3 (14:43):
And they ended up crashing because they were based on
behavioral economics that was predicted by a bunch of
geeks sitting in a garage, and that didn't work out.
So I think we should not rely on algorithmic orchestration
of societies of humans or societies of agents.

Speaker 3 (14:57):
And I think there are very important problems to solve.

Speaker 1 (14:59):
So what is the thing beyond algorithmic orchestration?

Speaker 3 (15:02):
So you need to slow things down. You need to
have some kind of a governance principle that's out there,
and, you know, that's how our society works. All the
banks and courts could run, you know, algorithmically in many cases,
but we still have a human in the loop, and we slow
things down, and we trade inefficiency for, you know, some
control over the system.

Speaker 3 (15:23):
So I think that becomes very critical.

Speaker 1 (15:25):
So what does this mean for biology research, for example?

Speaker 3 (15:28):
Yeah, I think scientific research is going to depend on
some kind of decentralization, because it's very unlikely that, you know,
all the data will come in one place, and, you know,
all the scientists will just pick up the phone and
start talking to each other. You know, we're all scientists,
and we like cooperation but also competition,
and, you know, the human motivation to compete is what actually
gets us going.

Speaker 3 (15:50):
So there will still be reviews, a competition to get
your paper published in top journals, but we need a
way for scientists to share their insights at the same time,
and for those insights they get something in return. So
it's crowdsourced, but with something in return. I like to use
the analogy of Google Maps and traffic on Google Maps. You know,
before that, remember, there was Garmin maps, which we barely used.

Speaker 3 (16:13):
And today, even if I know how to go from
point A to point B, you know, I open Google Maps
not because I want to know the route, but because
I want to know where everybody else is. So I'm
benefiting from the fact that everybody else is contributing their
GPS coordinates, and I get to see, you know, the
reds and the greens on the map, and turn-by-turn navigation.
But I'm also giving up my own GPS location while
I'm driving. I'm contributing to the system and I'm getting great benefits
in return. So can we create the equivalent of Google
Maps for any societal problem, you know, whether it's scientific
discovery or finding, you know, a treatment plan. Again, if
we could centralize all the data, we could solve these problems.
But we cannot centralize it, and so we must use a
decentralized method.

Speaker 1 (16:55):
And let me just make sure I have that. Why
can't we centralize it? Isn't that what OpenAI and
Meta and everyone else are doing?

Speaker 3 (17:00):
So, I mean, if you look at OpenAI and
everybody else, they, you know, scanned the whole Internet,
you know, kind of in a permissionless way. And
that's why they could train these multi-tens-of-trillions-of-tokens
models that are out there. But as I said,
that's just the tip of the iceberg. Most of
the information in the world is actually locked away. If
we take health as an example, it's locked away in hospitals,

(17:22):
in pharmaceutical companies, in all the failed trials, you know,
and all the research work in the lab notebooks of
you know, all the amazing postdocs and students.

Speaker 3 (17:32):
So all the information is actually not available.

Speaker 3 (17:35):
So think about it: if all the information was available to
OpenAI, it would do amazing things. It would
solve healthcare overnight. But it cannot, because of the competition.

Speaker 1 (18:00):
You know, I'm struck by an analogy here. I
saw an interview with Isaac Asimov in nineteen eighty-eight,
I think it was; he was on Jonah Lehrer's show,
and he said, look, I foresee a day when there's
a central mainframe that has all of humankind's knowledge,
and everyone has a cable to their house and they
can access the knowledge of the world that way. And

(18:21):
he was almost right. But what happened instead was, instead
of a centralized knowledge repository, it became completely decentralized
on the Internet.

Speaker 3 (18:30):
Yes, I'm glad you mentioned that, because, you know, today
we're kind of in the mainframe era of AI, and from
that we're going to switch to the PC era of AI,
then to the Internet era of AI. So, as you know,
in the eighties, big companies like IBM and Silicon Graphics
sold big mainframes, and we would literally connect
to these machines over dial-up modems, you know, yeah,

(18:55):
and even to use a calculator, by the way, we
had to use a dial-up modem; it was not available
on the machines. But in the late eighties and nineties
we said, hey, that's kind of stupid. I should just
have that calculator, or a word processor, or whatever software
I'm using, on my own home machine. And the
fact that we moved from the mainframe era to the PC
era was interesting, but we needed one more thing. Before that,

(19:16):
there was the Internet, which was mostly connecting machines, and Tim
Berners-Lee and others said in the nineties that actually what
we need is a World Wide Web, and he brought
in four things: a browser, protocols like HTTP and HTML,
and the notion of a URL.

Speaker 3 (19:32):
How do you discover something, right? And with those four
things he transformed the Internet into the World Wide Web,
so that humans can interact with all the stuff that's
out there. And one would argue that the impact of
the mainframe revolution and the PC revolution is
insignificant compared to the impact of the World Wide Web,

(19:54):
and the same thing is true today. The impact of
AI will be insignificant compared to the impact of the Internet
and web of AI that's out there.

Speaker 1 (20:04):
Terrific. And so tell us about Project NANDA.

Speaker 3 (20:10):
So these decentralization and agentic concepts map very nicely
to Project NANDA. NANDA stands for Networked AI Agents
in Decentralized Architecture, but it's also the name of my sister,
and Nanda means joy in Sanskrit.

Speaker 3 (20:28):
So I think, let's bring some joy to the technology
and the Internet and the web of AI agents.

Speaker 1 (20:34):
What is the project going to do?

Speaker 3 (20:36):
So Project NANDA is a large collective. We're looking
at these three phases of how the agentic world is
going to evolve, you know, the foundations, agentic commerce,
and, you know, agentic societies. It involves multiple
universities, multiple companies, and we're working
together to figure out how this, you know, web of

(20:59):
agents will evolve, the same way Tim Berners-Lee launched the
World Wide Web Consortium

Speaker 3 (21:04):
Out of MIT, to see how we can use the
Internet technologies in a responsible way and keep the
flock together. And so one should thank Tim Berners-Lee
for keeping it an open, vibrant, you know, and neutral
web that's out there, as opposed to what we saw
later on in the computing, you know, generation, where
things like social media or things like mobile phones are

(21:29):
highly fragmented systems and a few companies control, you know,
these ecosystems. As opposed to the World Wide Web: today
you can go online, go to GoDaddy dot com,
create a website, launch a business, create new innovations, do
whatever you want. And that permissionless nature of the World
Wide Web has really created and unlocked, you know,

(21:49):
not just, you know, trillions of dollars of economic value, but I
would say scientific progress, you know, new democracies, new societies.
I mean, it has really changed the way we think.
We also think about what would happen in the absence
of Project NANDA. It's possible that, you know,
there will be only two agent stores, just like we
have the App Store and the Play Store. Just two agent
stores, and any agent you create, every

(22:11):
one of us will have to go through these two
platforms, and they won't be talking to each other. And
those platforms will say, you will use our tools, we will tell
you which AI to use, we will tell you what
you can do and cannot do. And by the way,
any money you make from that, we will take thirty percent
of it.

Speaker 1 (22:25):
So the analogy here is like the Apple App Store
and Google Play.

Speaker 3 (22:28):
Exactly. And imagine if the World Wide Web wasn't open
and vibrant. Imagine there are only two platforms you can
use to create an online business: one called Squarespace and
one called Wix. There are only two companies, and
any website you create must be created on one of
those two platforms. You know, they will give you all
the tools, you know, they will do all the transactions

(22:49):
for you, they will decide which websites are better than
others. And by the way, any money you
make on the web, you have to share thirty percent
with Squarespace or Wix. That would be a pretty
dystopian world. And there's a chance that the agentic web
could also end up in this highly fragmented, highly dystopian
world. And the goal for NANDA is to kind of make sure we
have the freedom, we have the agency, and we have

(23:11):
the ability to do this in an open and vibrant and
neutral way. So all the organizations that emerged around the
World Wide Web, such as the World Wide Web Consortium,
the DNS, ICANN,

Speaker 3 (23:22):
Which actually allows you to have the domain name servers.
Mozilla, all these great organizations: we have to do
all that work again in the next six months. If
we don't do it, we'll end up in this highly
dystopian world where somebody else is controlling all the AI
and all the AI agents.

Speaker 1 (23:39):
So out of curiosity, when we're in this future world
where we've created this species that runs parallel to us,
all these AI agents running around at much faster speeds
than us, just out of curiosity, what do you think
the Internet slash World Wide Web is going to look
like for us? Will people still go and look up websites,
or instead will there be another version of websites, where my

(24:02):
AI agent can go and buy me a shirt and
not, you know, not have to look and scroll in
the way that we do?

Speaker 3 (24:09):
Yeah, I mean, a lot of the things
that have been developed online have been for, kind of,
human consumption and human interaction. And it's kind of clunky,
the fact that the whole world interacts with the internet
with your fingers on a keyboard. It's kind
of strange if you think about it. And so clearly,
you know, the web of the future will be multimodal.
We can talk to each other, you know, we'll have,

(24:30):
you know, I'll have, you know, an AI agent
just talking in my ear, almost like the movie Her, right?
So you can imagine the world becoming much more, you know,
much more human again, you know, as opposed to forcing
us to interact through this keyboard. So that's
kind of the possibility of human interaction. And we'll have
other technologies going along with that, like VR and AR

(24:53):
and new materials science and new technologies.

Speaker 3 (24:55):
I think all that will be part of it.

Speaker 3 (24:57):
And then, that's kind of the human at the individual level.
But at kind of a structural level, at a
systemic level, you can imagine all these agents doing
great work, you know, behind the scenes, making sure,
you know, our health is taken care of, my
finances are taken care of. I mean, think about it: there
are millions of people who are not financially savvy, and
are millions of people who are not financially savvy and
they have to proactively go and check online or with

(25:19):
their financial advisor about what they should do, you know,
but imagine if there's an agent who's helping them in
their financial decisions.

Speaker 3 (25:26):
You know, think about education.

Speaker 3 (25:28):
There are billions of people out there who don't get
a good quality education, and people often talk about how AI
is going to improve education, but agents will have much
more of an impact, because they can be highly personalized and
they can really understand the context. And to, you know,
use that AI, you don't have to make an API
call to somebody in California. You know, you could be
using it on your own device, you can be modifying it.

(25:51):
And so I think we'll be treating our agents
almost like Tamagotchis. You know, remember those toys?

Speaker 1 (25:58):
Just as a reminder to the listeners: you
have to pet them or take care of them, right,
and then they stay alive.

Speaker 3 (26:04):
Yeah, these were very addictive toys, popular
in the eighties and nineties, from Japan, where you had
to constantly kind of nurture them and feed them and
pet them, and they stay alive.

Speaker 3 (26:15):
And those are purely digital characters, and agents are going
to be something like that. You know, we'll really care
about, you know, nurturing them.
And my guess is that we'll spend like one third
of our life every day just making our agents better
and teaching them and coaching them and talking to them.
And these agents will be such a significant part of
our life, our health, our education, our social life, our

(26:37):
livelihoods and so on, that we'll be very kind of
attached to our agents, hopefully in a positive way. So
it will not only make kind of our daily interaction
more human, because you're not constantly looking at the screens
and computers all the time. At the same time, these
agents will be doing things, you know, kind of

(26:57):
behind the scenes, and make it
again a more democratic world, you know, kind of reduce
the friction between people who have information and people who
don't have information, or people who have a good Rolodex
and those who don't. So all this information asymmetry, you know,
power asymmetry, could be addressed if everybody's empowered with
their own agent.

Speaker 1 (27:33):
Okay. So that gives us a really good sense of
what this near-future world is going to look like.
It may be that the World Wide Web, which we've
grown so used to and that seemed like the future,
it may be that that's already becoming the past, in
a sense.

Speaker 3 (27:49):
It builds on top of each other, so I wouldn't say that.

Speaker 1 (27:51):
Agreed, agreed. But the idea of tapping a
keyboard for a particular website, or going to Google and
doing a search and then flipping through all these different websites,
it seems like that's already becoming the past.

Speaker 3 (28:02):
Yeah, I think, you know, our agents will be more
like our pets, or maybe a baby we are raising.
It'll be much more interactive, much more fun. I mean,
we'll still need computers to do, like, very information-heavy
aspects of our life, but the rest of the day, we
won't be writing emails, you know,
we won't be kind of scrolling through to see which
restaurant we want to go to, and, you know,
doctors won't be sitting in front of
the EHR records to decide how they should help

(28:31):
their patients, and financial folks will not be spending time
in front of huge spreadsheets. A lot of those things
will be happening behind the scenes. We'll still need spreadsheets,
and we'll still have to go to the raw, you know,
source of truth. But most of the time you'll have
these, you know, these agents as your companions.

Speaker 1 (28:49):
So what are the legal consequences, if you're just blue-skying
and thinking about the future? If your agent goes
off and ends up doing something
illegal, and that causes big damage, who is responsible,
you or the agent?

Speaker 3 (29:04):
This is a great question.

Speaker 3 (29:05):
Yeah, I think it's like saying, you know, if
you have a six-year-old child and they do something,
you know, are the parents responsible? I guess I don't
have an answer for it.

Speaker 1 (29:13):
It's a little bit different, because the legal system understands
that the child is separate. You try to do your
best raising the kid, but the kid goes off and,
often, it's an independent entity from the point of view
of the legal system.

Speaker 2 (29:23):
Right. I mean, in today's world.

Speaker 3 (29:25):
You know, if there's a you know, if a child
uses a gun, you know, the parents are considered responsible.

Speaker 2 (29:31):
Again, I'm not an expert in this topic.

Speaker 3 (29:33):
I would say that a lot of these protocols and systems will emerge over time. And yes, the agents will go to agent schools to get trained. There will be HR departments, you know, for if the agents are misbehaving, and there will be, you know, kind of societal values that will be infused into the agents, with
(29:55):
some characteristics.

Speaker 1 (29:56):
Wow, this is going to take a while for the legal system to work out, because you can imagine a bad actor programs a bad AI agent to go off and do stuff, and then he says, well, it wasn't really me, I didn't know this was in the code. And yet it caused all these problems.

Speaker 3 (30:11):
And remember, agents may or may not have kind of national boundaries. So, you know, a law in Indonesia may be different from a law in Malaysia. So what does that mean? So a lot of the work we do in Project Nanda is thinking about this notion of geofencing and credentialing and verifiability and so on. So a lot of our research goes into kind of cryptographically

(30:32):
verifying that it's doing the right thing, and also making sure that there's a reputation system that's, you know, encoded, and so on. So a lot of thinking about the good behavior and the bad behavior is encoded in how this agent Internet is being developed.
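The conversation doesn't spell out Nanda's actual protocol, but the combination of credentialing plus geofencing described here can be sketched in a few lines. In this toy illustration, the registry key, the claim format, and the region codes are all invented for the example; a real system would use public-key signatures rather than a shared secret. A registry signs a claim about where an agent may operate, and a verifier refuses to act on a tampered credential or a request from outside the geofence:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"registry-secret"  # hypothetical shared key held by the registry

def issue_credential(agent_id, allowed_regions):
    """Registry signs a claim stating which regions an agent may operate in."""
    claim = json.dumps(
        {"agent": agent_id, "regions": sorted(allowed_regions)},
        sort_keys=True,  # canonical serialization so the signature is stable
    ).encode()
    sig = hmac.new(ISSUER_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def verify_and_geofence(cred, region):
    """Check the signature first, then the geofence, before letting the agent act."""
    expected = hmac.new(ISSUER_KEY, cred["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False  # tampered or forged credential
    return region in json.loads(cred["claim"])["regions"]

cred = issue_credential("health-agent-manila", ["PH", "ID"])
print(verify_and_geofence(cred, "PH"))  # True: valid signature, allowed region
print(verify_and_geofence(cred, "MY"))  # False: outside the geofence
```

The point of the sketch is the ordering: verifiability (the signature check) gates the geofence check, so a bad actor can't simply edit the credential to claim a different jurisdiction.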

Speaker 1 (30:47):
Quick question: is there a way for listeners to see what Project Nanda is up to?

Speaker 3 (30:51):
Yeah, you can, you know, you can go create your own agent, David, in thirty seconds, and we'll post a link in the video. Go to join thirty nine dot org, j-o-i-n-three-nine dot org. You can create your own agent in thirty seconds. It'll be live online. Anybody else can interact with your agent. If you want, you can attach more information to it. You

(31:12):
can also start making money from it. Right now, we're just using a points system because it's research, so you can make points. So, for example, there could be, you know, a David agent, and you're a fancy neuroscientist, so you could say, hey, if you want to talk to my agent, I'm going to charge you a dollar a minute. At the same time, when you go to use somebody else's agent, maybe it costs you ten cents a minute, right, and so on. So you can go and

(31:34):
join thirty nine dot org, which is kind of an open website, to start playing with your agents, create your profiles, start making money off it, you know, send links to others, and so on.

Speaker 2 (31:45):
So kind of early days.

Speaker 3 (31:46):
But yeah, you can definitely go to projectnanda dot org, which is the main website, with all the technology, all the source code, all the research papers, you know, all our industry partners and university partners working together. So you can look at all of that, or you can just go to join thirty nine dot org and get started.

Speaker 1 (32:03):
Oh, amazing. Okay. So now here's the part I want to get to, something that you and I have a common interest in: the analogies between the brain and computation in general. And what's interesting is this has a long history, you know. Johnny von Neumann, for example, looked to the brain and said, oh, what I'm building

(32:24):
with this digital computer is like a brain. And nowadays
people often look at the brain and think, oh, it
is like a digital computer, and so on, and so
they're always looking to each other. But here's the interesting thing,
as I think you know, in my book Incognito a
while ago, I wrote that the brain, really the way
to think about this is like a team of rivals,
where you've got all these different neural networks that are

(32:45):
all trying to drive this ship of state, and they
have different desires and different needs and different outcomes that
they're looking for. And your behavior at any given moment
is who wins the parliamentary vote. And this is why
humans are so complex. This is why you can get
mad at yourself, or cuss at yourself, or cajole yourself,

(33:08):
or contract with yourself, because you know, if you were
a single thing, we'd say, wait, who's talking to whom exactly?
But what's happening is you're made up of all these
different things. So as a rough analogy for today's conversation.
It's like, we have a bunch of different agents that
are running in there, and somehow you get this emergent
behavior that we call you. But of course you're not

(33:29):
the same person from moment to moment, or hour to hour. So I want to explore this analogy here.
What's your reaction to this when you think about the
similarity between a future society of AI agents and what's
going on in the brain.

Speaker 3 (33:42):
I mean, you know, Marvin Minsky, also at the MIT Media Lab, talked about the Society of Mind and this notion that there's no single brain, but you have all these components of brains, you call them agents, and they're all working together without a single orchestrator, and each of them is seemingly doing a meaningless task, but, you know, together they achieve something amazing.

Speaker 1 (34:03):
And as you know, by the way, what I think Minsky's model is missing is competition. That's what seems to be going on in the human brain: these little agents are working together, but they're all competing slash cooperating.

Speaker 3 (34:16):
Yeah, and David, you wrote an article, I believe in twenty eleven, in a magazine, where you talked about conflict as an important component of all these agents talking to each other.

Speaker 2 (34:26):
I thought that was amazing that you.

Speaker 3 (34:27):
Wrote that even before the twenty twelve AlexNet paper, which brought us, you know, a lot of these amazing new machine learning benefits. So you were a year or two before that.
And so this notion of cooperation and competition between these micro-AIs or subcomponents, I think, is a very important idea.
And the fact that there's no global orchestrator either. Even

(34:50):
our prefrontal cortex is only doing integration, not orchestration, right?

Speaker 2 (34:56):
That's an important concept.

Speaker 3 (34:57):
So if you take that analogy further, that's decentralization, right? Because you're letting all these subcomponents do highly specialized tasks, and there's an emergent intelligence from that, which is exactly how we.

Speaker 2 (35:12):
Should be thinking about decentralized intelligence.

Speaker 3 (35:15):
And I think OpenAI and, you know, Google, they all realized this, because what happened in the research world is we started going toward concepts like mixture of experts. What is a mixture of experts? It's highly specialized components of machine learning models that work together to figure out what you should do. And then they have these concepts

(35:36):
like routers, where when a task comes in, you decide which experts within this mixture of experts should be used. If you remember, in the beginning, when ChatGPT came out, it couldn't do math very well, and of course you shouldn't be using a language model to do math.

Speaker 2 (35:53):
You should be routing it to a simple calculator.
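As a toy illustration of that routing idea (not how ChatGPT actually works; a production mixture-of-experts gate is a learned network, and the two "experts" here are invented stand-ins), a dispatcher can inspect an incoming task and hand pure arithmetic to a calculator while everything else falls through to a language-model stub:

```python
# Toy router in the spirit of a mixture-of-experts gate: inspect the task
# and dispatch to a specialized "expert" rather than asking one giant model.

def calculator_expert(task: str) -> str:
    """A deliberately restricted arithmetic evaluator."""
    allowed = set("0123456789+-*/(). ")
    if not set(task) <= allowed:
        raise ValueError("not pure arithmetic")
    # eval is acceptable here only because the input is whitelisted above
    return str(eval(task))

def language_expert(task: str) -> str:
    """Stand-in for a language model handling open-ended requests."""
    return f"[language model would answer: {task!r}]"

def route(task: str) -> str:
    """Send arithmetic to the calculator expert, everything else to the language expert."""
    try:
        return calculator_expert(task)
    except ValueError:
        return language_expert(task)

print(route("12 * (3 + 4)"))          # -> "84", handled by the calculator expert
print(route("Summarize this email"))  # handled by the language-expert stub
```

The design point mirrors the transcript: the router's job is cheap triage, so each specialized component only ever sees the kind of task it is good at.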

Speaker 3 (35:56):
And now, you know, there are many other such branches that happen in any model that you use, let's say ChatGPT. So what folks have realized very quickly is that actually training one giant model doesn't make sense. We want to kind of route it to all this other intelligence. Now, if you take that analogy further, by the way, then we come to this modern world of

(36:16):
chain of thought or tree of thought, and then reasoning, and all of that is basically pointing us in one direction, which is the concept of the society of the mind, with cooperation, competition, the conflict that you talked about.

Speaker 2 (36:31):
So it's moving in that direction.

Speaker 3 (36:33):
So why don't we imagine what could happen over time? It's not only that, you know, all those subcomponents of AI are working in this one machine, say ChatGPT; actually, they're working across the world. And there's an amazing health agent that's running in Manila, there's an amazing legal agent that's running, you know, in Rio.

Speaker 2 (36:55):
You know, there's an.

Speaker 3 (36:55):
Amazing software agent that's running in Bangalore. There could be all these things running

Speaker 2 (36:59):
All over the world.

Speaker 3 (37:01):
They are updated, you know, through local innovators, and through this everyone gets the benefits of this global intelligence. So we can see that progression happening. As you said, in the Society of Mind, one missing piece was this notion of rivalry, or competition and conflict. Another piece that's missing, or is required, I would say, as we go to

(37:22):
this decentralization, is the

Speaker 2 (37:24):
Fact of incentives.

Speaker 3 (37:26):
You know, why should these folks actually participate? And when they do participate, how do we make sure there are no bad actors? And so if you kind of take that thought process forward, you realize: wow, first of all, it's not centralized, it's decentralized. Second, there is not only cooperation, but there's competition and rivalry. Okay, but to

(37:48):
support that, we need different incentives for them to participate. But then, if you provide incentives, bad actors will have an incentive to participate even though they have nothing to contribute, or they're maliciously contributing. So we need a way to deal with those bad actors. And then you say, okay, at the end of the day, we've got all these thousands or millions of agents doing these tasks. How do we bring all

(38:08):
of that together, right? So you need some kind of integration of that. So that's basically what Project Nanda is. That's basically what decentralized AI is. The same way our brains and societies work is going to get mimicked in the future of the Internet.
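The incentive-and-bad-actor loop described here can be sketched as a simple ledger. The scoring rule, the trust threshold, and the agent names below are hypothetical illustrations, not Project Nanda's actual mechanism: agents whose contributions are validated accumulate standing, while agents who contribute nothing useful, or contribute maliciously, fall below the threshold and get filtered out of the network.

```python
from collections import defaultdict

class ReputationLedger:
    """Track validated vs. rejected contributions per agent and gate on trust."""

    def __init__(self, threshold=0.5):
        self.good = defaultdict(int)   # count of validated contributions
        self.bad = defaultdict(int)    # count of rejected/malicious contributions
        self.threshold = threshold     # minimum fraction of good contributions

    def record(self, agent: str, contribution_ok: bool):
        (self.good if contribution_ok else self.bad)[agent] += 1

    def score(self, agent: str) -> float:
        g, b = self.good[agent], self.bad[agent]
        return g / (g + b) if g + b else 0.0  # unknown agents start untrusted

    def trusted(self, agent: str) -> bool:
        return self.score(agent) >= self.threshold

ledger = ReputationLedger()
for ok in [True, True, True, False]:
    ledger.record("health-agent-manila", ok)
for ok in [False, False, True]:
    ledger.record("spam-agent", ok)

print(ledger.trusted("health-agent-manila"))  # True  (3 of 4 contributions good)
print(ledger.trusted("spam-agent"))           # False (1 of 3 contributions good)
```

Starting unknown agents at zero trust is one answer to the incentive problem the transcript raises: participation alone earns nothing until contributions are actually validated.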

Speaker 1 (38:28):
That was my interview with Ramesh Raskar. As I've talked
about on earlier episodes, we live in a world increasingly
shaped by invisible architectures, by algorithms we don't see, by
recommendations we don't question, and centralized systems that quietly make
decisions on our behalf. But what Ramesh is asking us
to picture is something different. He's asking what if AI

(38:52):
could protect your privacy instead of compromising it. What if
sharing data didn't mean giving up control over it? Buried in here are lots of questions about trust, about power, about what kind of society we want to build, and Ramesh would be the first to admit that there are so many questions that still need to be worked out.
But let me say two things. First, I think it's

(39:13):
so deeply important that people like Ramesh and all of
his colleagues are pulling up chairs at the table to
try to figure out the structure of these questions because
agentic AI is already here and its future is only
getting more complex and fast and ubiquitous. So this new

(39:34):
type of society that we touched on today, with this
parallel species running at speeds we can't even comprehend: all this is an inevitability, and it's not even particularly far away.
So I'm happy to see collections of smart humans trying
to build the right lanes and guardrails. Second, if the

(39:55):
brain teaches us anything, it's that intelligence doesn't have to come from top-down command. In the brain, it arises
from dialogue and disagreement and competition among subsystems that don't
always see eye to eye but still somehow make it work.
So in the AI world, what if intelligence could emerge

(40:16):
from a constellation of local minds, each acting in context,
but contributing to something larger than themselves. I'm really interested
to see what we are going to learn about intelligence
in ten years when we're looking at billions of agents
running around and we're asking if that collection somehow gives

(40:38):
us artificial general intelligence. So that's the current game, and
in Ramesh's view, our best shot at building ethical, robust, intelligent,
and democratic AI is to let go of the fantasy
of central control and the despotism of oligopolies and instead

(40:59):
learn from the messy, resilient genius of our own biology.
Go to eagleman dot com slash podcast for more information
and to find further reading. Join the weekly discussions on
my substack, and check out and subscribe to Inner Cosmos
on YouTube for videos of each episode and to leave

(41:19):
comments until next time. I'm David Eagleman, and this is
Inner Cosmos.
Host

David Eagleman


© 2025 iHeartMedia, Inc.