Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
That was really the core idea: to make the EVM execution a lot more performant, and then build a consensus mechanism that can keep up with that really high execution throughput, in order to maintain a really high degree of decentralization and fully decentralized block production.
I think Solana is processing somewhere between 2,000 and 3,000
(00:20):
transactions per second, whereas Monad is supporting over 10,000 transactions per second. And then also Solana has relatively high hardware requirements. I believe the requirement is 256 gigs of RAM right now, whereas Monad requires nodes to have 32 gigs of RAM.
Welcome to Epicenter, the show which talks about the
(00:42):
technologies, projects and people driving decentralization and the blockchain revolution. I'm Brian Crain, and today I'm speaking with Keone Hon, who is the co-founder and CEO at Monad Labs. Monad is one of the most ambitious and interesting new layer ones coming up, expected to launch next year. So I'm really excited to get
(01:05):
into the details of that with Keone.
Now, before we get into it, we just want to share a few things from our sponsors this week. If you're looking to stake your crypto with confidence, look no further than Chorus One. More than 150,000 delegators, including institutions like BitGo, Pantera Capital and Ledger, trust Chorus One with their
(01:27):
assets. They support over 50 blockchains and are leaders in governance on networks like Cosmos, ensuring your stake is responsibly managed. Thanks to their advanced MEV research, you can also enjoy the highest staking rewards. You can stake directly from your preferred wallet, set up a white-label node, restake your assets on EigenLayer or Symbiotic, or use their SDK for multi-chain
(01:47):
staking in your app. Learn more at chorus.one and start staking today. This episode is proudly brought
to you by Gnosis, a collective dedicated to advancing a decentralized future. Gnosis leads innovation with Circles, Gnosis Pay and Metri, reshaping open banking and money. With Hashi and Gnosis VPN,
(02:09):
they're building a more resilient, privacy-focused Internet. If you're looking for an L1 to launch your project, Gnosis Chain offers the same development environment as Ethereum with lower transaction fees. It's supported by over 200,000 validators, making Gnosis Chain a reliable and credibly neutral foundation for your applications.
(02:30):
GnosisDAO drives Gnosis governance, where every voice matters. Join the Gnosis community in the GnosisDAO forum today. Deploy on the EVM-compatible Gnosis Chain or secure the network with just one GNO and affordable hardware. Start your decentralization journey today at gnosis.io. Cool.
(02:53):
Well, thanks so much for coming on, Keone. It's really great to have you on.
Yeah, thank you for having me, Brian.
So I like to start by kind of asking people about how their crypto journey began. Like, how did you first become interested in crypto?
Yeah.
(03:14):
So, I'm 35. I've been working for like 13, 14 years at this point. The first 10 years of my career were spent in high-frequency trading, working as a quant. So I traded in a number of traditional markets, mostly on the futures side. These are really high-volume,
(03:35):
very liquid markets, like S&P 500 futures or 10-year Treasury note futures or crude oil futures. Building performant trading systems that trade in these really efficient markets, generating very, very small statistical profits over a long period of time. And over some
(04:00):
period of time, our team started to trade crypto, initially just sort of without a lot of understanding about the underlying assets that we were trading.
But crypto is really interesting from a trading perspective, because there are so many different assets. Like, there are so many different coins themselves, but then a lot of
(04:22):
different perpetual futures and deliverable futures and many different exchanges. So there's a lot of interesting correlation structure across the space. And that was initially what got me and my trading team at Jump Trading involved in crypto. But then at the same time, I was also just getting really into crypto Twitter and enjoying
(04:47):
all the narratives, all the memes, all the characters in 2020 and 2021. So that was sort of the introduction to the space. And then my team ended up merging into the crypto team at Jump Trading in mid-2021 and started working on Solana DeFi for a little bit of time.
(05:07):
Then at that point I was fully professionally in crypto, prior to starting Monad at the beginning of 2022.
Right. So you basically first started trading, you know, just on centralized venues, and then also started doing more like on-chain stuff?
Yeah. From a trading perspective, I was more
(05:31):
focused on the centralized side. I mean, in a personal capacity I was trying out different DEXes and, yeah, buying NFTs and, yeah, getting rugged on meme coins of 2020, 2021, aside from Dogecoin, which did really well. But
(05:54):
yeah, in the professional capacity it was mostly on the centralized side.
I'm curious, from all the knowledge you had about how the traditional markets work, when you then came to crypto, what were the things that seemed most weird to you, or most surprising?
(06:17):
I think two things come to mind. The first is just the general inefficiency of the markets and the typical spreads that you end up paying as a retail trader. In the traditional markets it of course depends on the liquidity of the asset. But for, you know, most normal
(06:39):
futures or most normal equities, people will end up paying single-digit basis points in spread and slippage combined. Like, their execution price is never more than, you know, one or two cents different from whatever the midpoint of the market is.
(07:01):
And this is for an equity that's trading at $100 or $200 or more. So single-digit basis points. And then if you go to the DeFi space, you know, the default is like a 30 bps fee. And then in addition, when you trade there's some price impact, and then when you price in
(07:23):
some slippage, and then you end up getting sandwich-attacked, you know, it's just very common for people to end up paying like 50 basis points or 1% or 2% in slippage. So it's just, you know, two orders of magnitude more than you would in the centralized space.
So that's the first thing. And then the second thing is
(07:44):
just the fact that when a transaction is submitted, it's in a pending state in most blockchains. Therefore, it's subject to the discretion of the block producer in terms of the ordering, which then has an effect on the execution price as well.
And then tell me, what's the origin story of
(08:06):
Monad? Is that what you did after Jump?
Yeah. So I met James Hunsaker, who's co-founder and CTO here at Monad Labs, in 2014, when we were both at Jump on the same trading team, and we've been working together ever since then. We both left Jump at the very
(08:31):
beginning of 2022 and then started Monad shortly after that, along with the third co-founder.
And what was sort of the vision that, like, caused you to say, OK, this is the thing I want to work on?
Yeah, I think as with any startup, you really need an
(08:54):
idea, and an idea of how that idea fits into the broader landscape of the space that you're working in, and a clear problem that it's solving.
For us, prior to starting Monad, when we were at Jump Trading, for about six months we were mostly working on Solana DeFi.
(09:16):
And at that time, in 2021, we could see that there were a lot of advantages to building on Solana, because, you know, Solana was offering like 500 to 1,000 real TPS of throughput. So much, much more throughput than Ethereum; Ethereum was about 10 TPS. And also much more throughput than
(09:38):
any of the rollups, which were, you know, also in the like 10 to 30 TPS range. Much more throughput than other EVM layer ones, which are also in the like 10 to 100 TPS of throughput range. So there were a lot of advantages to Solana. But at the same time, builders building on Solana had to build with a completely different bytecode standard.
(09:58):
They couldn't reuse any of the work that they'd done if they'd built for the EVM already. They couldn't use any of the really rich array of tooling and libraries and so on. And we just realized that we could give developers the best of both worlds and give them both performance and portability, focusing very heavily on optimizing all parts
(10:20):
of the EVM stack. And that was really the core idea: to make the EVM execution a lot more performant and then build a consensus mechanism that can keep up with that really high execution throughput, in order to maintain a really high degree of decentralization and fully decentralized block production.
So I'm curious, because I feel
(10:42):
like this EVM discussion, you know, it's been a very long-standing discussion, where some people will be like, oh, the EVM is fine. And then a lot of people will be like, oh, it's kind of this odd thing that was created early on, and just because it was the first, it has so much adoption. How do you feel about
(11:07):
comparing the EVM with other VMs? Do you feel like the biggest advantage of using the EVM is just that the existing developer ecosystem, tooling and contracts exist, or are there other kinds of pros and cons versus other VMs?
Right. I think that, well, we'll see. Just to state it
(11:32):
very explicitly: the EVM is the low-level bytecode standard, but then typically developers are building in Solidity, or occasionally in Vyper or Huff or other front-end languages, and that high-level language gets compiled down to EVM bytecode. But it is really true that
(11:53):
almost all capital on chain is in the EVM right now. Like, over 90% of all TVL on chain is in EVM dapps. Additionally, there's a ton of existing libraries and tooling. Almost all the applied cryptography research is being done in the context of the EVM as well. People don't really think about that part, but
(12:14):
very much on the research side, a lot of zero-knowledge research is being done ultimately to interface with the EVM bytecode standard as well. So for developers, they would probably prefer to build for the EVM, given all these network effects
(12:37):
and the fact that by building for the EVM, they're not tied to one ecosystem or a very small subset of ecosystems. They can really deploy their work almost anywhere in the future. So it's really just future-proofing the work that they've done, in addition to having access to all of the existing libraries and tooling.
Do you think this is something
(13:00):
that, like, kind of network effect is going to just continue compounding and grow bigger in the future, or do you feel other things, like, you know, the Solana VM or the Move VM or any of these others, have a real shot at at some point taking over?
I think the future is very fluid. It really depends on the path of least resistance for developers,
(13:27):
and our team thinks that the work that we're doing to make the EVM a lot more performant, so that developers are really not forced to choose between performance and portability, will have an impact on how people think about what VM to build for. You're right that in the current
(13:51):
regime, at this very moment, developers might be moving over to Solana, because they feel that that's where they'll get the best performance, and also because there is a lot of excitement around the Solana ecosystem right now. But that all just can change with the introduction of new technology.
So when it comes to scaling, so removing all the bottlenecks and making the EVM very
(14:14):
performant, what are the things that you guys did to achieve that?
Yeah, there are at least four different major areas of optimization, or four different new architectures that have been introduced, in addition to just more general optimization.
(14:34):
First of all, Monad is built completely from scratch. So all parts of the stack are built from scratch for performance, in C++ and Rust. And then we've introduced several new architectural improvements, which I can describe briefly. I think the main way to think
(14:55):
about it is that there are execution improvements, there are consensus improvements, and then there are improvements in how consensus and execution interact with each other. So the first two improvements that I want to mention are both related to execution. The third one is related to consensus, and the fourth one is related to how consensus and execution interact. So on the execution side, the
(15:17):
two major things that we've done are introducing a new database and introducing optimistic parallel execution. And the reason why these two things are both needed in order to make execution a lot more performant is, well, first of all, let me talk a little
bit about the job of execution. So, like, in Ethereum there's a
(15:40):
block, and it has a whole bunch of individual transactions that are supposed to be run sequentially in order to get to the end result. And initially you might think that there's no way to parallel-process, because the true state of the world is the state of the world assuming that all those
(16:01):
transactions are run one after another. So for example, if I start with 200 USDC in my account, and then the first transaction is me sending 150 USDC to you, and then the second transaction is me sending 100 USDC to my brother: you know, the first transaction, when it gets executed, will take my balance
(16:23):
from 200 to 50. And then the second transaction, when it runs, will take my balance from 50 to, still, 50, because that transaction will fail, because it was trying to send 100, but I don't have 100 anymore. And so I think this is an example that shows you that you do kind of have to execute the transactions, at least in some
(16:43):
sense, serially, because you need to have done that first transaction, where I was sending you 150, before we can actually get the correct result of the second transaction. If you ran them in parallel, just totally naively, we would think that both of them succeeded.
So there is a notion in which sequential execution is needed,
(17:06):
but in Monad we introduce optimistic parallel execution to do a bunch of work in parallel, but then do the right amount of bookkeeping so that we're keeping track of inputs and outputs, i.e. storage slots: both the values that were read in from a storage slot and the values that are written to a storage slot. And although we do a bunch of
(17:28):
work in parallel, we do bookkeeping and then commit those pending results in the original serial order, and re-execute any transactions with unexpected results, where the inputs have since changed. And so in that particular example, we would do those two pieces of work in parallel, we would get pending results for both of those two transactions, and then the second pending result
(17:49):
we would invalidate and re-execute, because we would realize that it was done wrongly the first time.
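To make the bookkeeping concrete, here is a minimal Python sketch of that two-phase pattern, using the 200 USDC example from above. This is purely illustrative, not Monad's implementation (which is in C++/Rust); the account names and the `transfer` helper are invented for the example:

```python
# Minimal sketch of optimistic parallel execution: run transactions in
# parallel against a snapshot while recording their read sets, then
# commit in the original serial order, re-executing any transaction
# whose inputs have since changed.
from concurrent.futures import ThreadPoolExecutor

state = {"alice": 200, "bob": 0, "brother": 0}  # hypothetical accounts

def transfer(sender, recipient, amount):
    """Execute one transfer attempt; return its (reads, writes)."""
    reads = {sender: state[sender], recipient: state[recipient]}
    if reads[sender] >= amount:
        writes = {sender: reads[sender] - amount,
                  recipient: reads[recipient] + amount}
    else:
        writes = {}  # insufficient balance: transaction fails, no writes
    return reads, writes

txs = [("alice", "bob", 150), ("alice", "brother", 100)]

# Phase 1: execute everything optimistically, in parallel.
with ThreadPoolExecutor() as pool:
    pending = list(pool.map(lambda t: transfer(*t), txs))

# Phase 2: commit in serial order, re-executing on stale reads.
for tx, (reads, writes) in zip(txs, pending):
    if any(state[k] != v for k, v in reads.items()):
        reads, writes = transfer(*tx)  # inputs changed: re-execute
    state.update(writes)

print(state)  # {'alice': 50, 'bob': 150, 'brother': 0}: second tx failed
```

Both transactions initially read a balance of 200; at commit time the second one's read is stale, so it is re-executed and correctly fails, exactly as in the spoken example.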
OK, because I think some other chains, they first try to check, like, OK, what contracts does it touch? And then if it doesn't touch these contracts, then you say, OK, you can parallelize it. But in your case you just execute it, and then you see afterwards what it touches, and
(18:11):
then you kind of roll things back, almost?
Yeah, that's exactly right. Although I wouldn't call it rolling back, necessarily, because nothing has really been committed yet. But yeah, that's a good characterization: a lot of blockchains that do parallel execution require
(18:34):
explicit dependency specifications.
So Solana is a good example of this, where when you submit a transaction, you have to say the exact pieces of state that that transaction is going to touch, and indicate which ones are going to be read and which ones are going to be written to, a.k.a. the read-write locks. And then that information is
(18:56):
used as an input to the Solana scheduler to make decisions about how to parallelize work. And if you misspecify one of the dependencies, like if the transaction tries to go out of bounds and touch a piece of state that wasn't specified ahead of time, then it just automatically fails. So that's a more explicitly defined model for
(19:19):
transactions. And you would think that it might allow for more performance, because you have all the dependencies up front. But in practice, what we found is that you can get a lot of performance from this pure optimistic approach, where you just assume that everything is OK, you're just discovering dependencies on the fly,
(19:39):
but then you put the results in a pending state, and you commit them serially and re-execute if you need to.
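For contrast, here is a toy scheduler for the declared-dependency model just described. This is illustrative only, not Solana's actual runtime: transactions declare read/write sets up front, and two transactions may share a parallel batch only if neither writes state the other touches:

```python
# Toy scheduling with explicit read/write sets (stylized, not Solana's
# real scheduler). Each tx is placed in the first batch after the last
# batch it conflicts with, preserving serial order among conflicts.
def conflicts(a, b):
    """True if two access lists cannot safely run in parallel."""
    return bool(a["writes"] & (b["reads"] | b["writes"]) or
                b["writes"] & (a["reads"] | a["writes"]))

def schedule(txs):
    batches = []
    for tx in txs:
        last = -1  # index of the last batch holding a conflicting tx
        for i, batch in enumerate(batches):
            if any(conflicts(tx, other) for other in batch):
                last = i
        if last + 1 == len(batches):
            batches.append([])
        batches[last + 1].append(tx)
    return batches

txs = [
    {"name": "tx1", "reads": {"oracle"}, "writes": {"alice"}},
    {"name": "tx2", "reads": {"oracle"}, "writes": {"bob"}},   # parallel with tx1
    {"name": "tx3", "reads": {"alice"}, "writes": {"carol"}},  # conflicts with tx1
]
for i, batch in enumerate(schedule(txs)):
    print(f"batch {i}:", [tx["name"] for tx in batch])
```

An undeclared access would simply fail the transaction in this model, whereas the optimistic approach above discovers the same conflicts after the fact and re-executes.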
Maybe we can come back to this later. But, you know, one question I'm very curious about is the whole question of, like, MEV and PBS, because I think that maybe relates to this as well. But maybe let's go through the...
(20:00):
you mentioned four things, so let's go through the other ones first.
Oh, yeah, sure. So I was telling you about one of the two major improvements that are improving execution. So I told you about the first one already, which is optimistic parallel execution. The second one related to execution is this new database that we've built, called MonadDB.
(20:22):
And so the thing to know about this particular topic is that Ethereum stores all of its state in a Merkle tree. And the benefit of storing all the state in the Merkle tree is that the Merkle tree has a Merkle root, which is essentially a checksum over all of that state, and in that way a commitment to all of that state. So, like, if you and I are both
(20:45):
running full nodes and we both have the same Merkle root at the top of our tree, then we both know that literally every single piece of state is exactly the same on both of our machines. So we don't have to go state by state and compare all of them. We can just compare the Merkle roots. It's a very efficient way of ensuring that we're on the same page.
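As a toy illustration of why a single root suffices as a commitment, here is a minimal Merkle-style root over a key-value state. Note this is just the hashing idea; Ethereum actually uses a Merkle Patricia trie with RLP-encoded nodes:

```python
# Toy Merkle-style commitment over a key-value state (illustrative only).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash pairs of children upward until a single root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

def state_root(state: dict) -> bytes:
    leaves = [f"{k}:{v}".encode() for k, v in sorted(state.items())]
    return merkle_root(leaves)

node_a = {"alice": 50, "bob": 150}
node_b = {"alice": 50, "bob": 150}
# One 32-byte comparison tells us the entire state matches.
print(state_root(node_a) == state_root(node_b))  # True
```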
(21:06):
And the Merkle tree is, you know, a successively hashed tree, where every parent is a hash of all of its children, and then you just propagate that all the way up to the root of the tree. And the thing to know about existing systems is that this Merkle tree structure is generally embedded inside of another database.
(21:30):
For Go Ethereum it's LevelDB or PebbleDB; in Erigon and Reth it's MDBX, which is another database. But in any event, all of these Ethereum clients use a different database as the actual store for all of this Merkle
(21:51):
tree data. And it creates a lot of interaction, because each of those databases themselves has another tree under the hood that's being used to define how data is being stored on disk. So when you want to navigate from the root all the way to one of the leaves in the Merkle tree, each time you visit a node, you're actually triggering another lookup into another
(22:12):
tree. And so it's just really inefficient to go navigate from the root of the Merkle tree down to any particular node, because each node that you visit is going to trigger an entire lookup into another tree.
So with MonadDB, and apologies for the long explanation there, but with MonadDB we're actually storing the Merkle tree natively
(22:33):
on disk. So there's an exact mapping between how the Merkle tree is laid out and, you know, the locations on disk, the pages. And then in addition to that, there are also a lot of other optimizations, like introducing asynchronous I/O support so that many pieces of state can be read at the same time, bypassing the
(22:55):
kernel, a bunch of other optimizations. But what you really get is a much more efficient lookup system, and also one that can support effectively parallel reads, which then interacts really well with the optimistic parallel execution. Because in optimistic parallel execution you are running many transactions, each of which is at some point encountering SSTOREs and SLOADs and
(23:19):
triggering database lookups. And so while all these database lookups are being triggered in parallel, by running all these transactions in parallel, the disk is also able to service a lot of those requests in parallel and start serving state back to each of those transactions.
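A toy picture of why asynchronous I/O matters here: with blocking reads, total time is the sum of the lookup latencies; with many lookups in flight, it approaches the latency of one. A sketch, simulating disk latency with sleeps (the numbers are illustrative, and a real system would use something like io_uring rather than asyncio):

```python
# Toy illustration of parallel state lookups via async I/O.
import asyncio, time

LATENCY = 0.0001  # pretend each SSD lookup takes ~100 microseconds

async def read_slot(slot):
    await asyncio.sleep(LATENCY)   # stand-in for one disk round trip
    return f"value-of-{slot}"

async def main():
    slots = [f"slot{i}" for i in range(1000)]
    start = time.perf_counter()
    # all 1,000 lookups are in flight at once rather than one by one
    values = await asyncio.gather(*(read_slot(s) for s in slots))
    elapsed = time.perf_counter() - start
    print(f"{len(values)} parallel reads in {elapsed*1000:.1f} ms "
          f"(serially this would take ~{1000*LATENCY*1000:.0f} ms)")

asyncio.run(main())
```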
OK. So basically what you're saying
(23:41):
is, Ethereum has this tree structure defined, where, you know, the data, the state, is hashed and then you have a root hash. And then that's implemented on top of other tree structures, which depend on the different databases. So you have sort of a tree on a tree, which then makes it
(24:01):
inefficient, whereas you guys basically just use one underlying tree structure in the database to store everything. So you kind of get rid of one layer of complexity there.
Yeah, that's exactly right.
And that's super important, because all of this data is living on SSD. And the mental model that you
(24:24):
should have about an SSD is that it supports a lot of work being done and a lot of lookups being done in parallel. So, like, a good SSD does a million or more IOPS, I/O operations per second, but the latency of each one of those lookups is somewhere between 40 and 100 microseconds.
(24:47):
So you can think of it as like a bottle that has a really wide neck, and you want to be able to stick a whole bunch of straws into the bottle at the same time and, you know, put all the straws in your mouth and suck a whole bunch of juice out of the bottle all at the same time. But the straws are long: there's a long latency to look up any single
(25:09):
piece of storage. So in the context of that nested tree structure that we were talking about, you're going to end up having to go back and forth with the disk a whole bunch of times just to get one piece of state. And when you can reduce the number of back-and-forths substantially, you can get a lot more throughput out of the
(25:30):
system.
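A quick back-of-the-envelope on those SSD numbers, using Little's law (the figures are the ones quoted above; the midpoint latency is my assumption):

```python
# Back-of-the-envelope: how much parallelism an SSD needs to hit its
# rated throughput (Little's law: in_flight = throughput * latency).
iops = 1_000_000   # ~1M lookups/second for a good SSD (quoted above)
latency = 70e-6    # ~40-100 microseconds per lookup; take ~70us

in_flight = iops * latency
print(f"need ~{in_flight:.0f} concurrent lookups to saturate the drive")
# => ~70: one-at-a-time access uses a tiny fraction of the drive's
# capacity, which is why parallel reads matter so much.
```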
And so that is primarily an advantage also because, I can imagine, there are different advantages to this, right? Like, so one is that when you execute a lot of transactions, you can just do it faster. But maybe it also makes running a node more efficient, you know, if it's just like a normal node where you look up the state, that's
(25:53):
not like a validator?
Yeah, that's right. The database is just how all the state is stored for any node, whether it's a full node, like a non-validating full node, or a validating full node. Either way, they're still going to need to respond to RPC calls and execute transactions,
(26:15):
or go look up pieces of state, and all of those are accelerated by having a better state backend.
Cool. So we have this optimistic
What's the next one? Right.
So the two other areas that I mentioned are on the consensus
side and on the interaction between consensus and execution.
(26:40):
So to mention the consensus partfirst, we have a new consensus
mechanism called Monad BFT, which is pipelined to phase hot
stuff. So hot stuff is a consensus
mechanism that has linear communication complexity.
That means that as the number ofnodes in the network increases,
(27:05):
the amount of overall number of messages that need to be sent
increases linearly with the number of nodes.
As opposed to other consensus mechanisms like Tenderman AKA
comma BFT where it it's quadratic.
Like you know, it's the square of the the number of nodes in
(27:26):
the network. Because in tenderment it's all
to all communication, whereas inhot stuff it's generally one to
many, many to one communication.OK, so what's what's the kind of
number of validators that you expect that monad can scale to?
We expect a couple of 100 validators participating in
(27:48):
consensus on day one of Monad Mainnet and then we have a
slightly longer term road map toget that to the thousands.
But with the current consensus mechanism implementation, the IT
can support somewhere between 2 and 300 validators participating
(28:09):
in consensus. OK.
Because that's kind of similar to, I mean, OK, tenement chains
maybe is more like I would say sort of up to 200.
I think it's probably the most that people run.
What about their block time? Right, so monad BFT in monad is
being configured with a one second block time, as I said, 2
(28:33):
to 200 or more validators participating in consensus.
And the other thing I forgot to mention is that monad BFT has
single slot finality, which we think is really important
because you know, there's it affects the the bridging time.
Like if you're trying to bridge off of monad to another
(28:55):
blockchain, then typically, well, the bridge really should
wait until the chain is finalized before relaying a
message to any other chain. And in Ethereum where you have
somewhere like 12 to 18 minute finality, that means it just
takes a super long time to get your assets off of it.
(29:16):
Assets off of Ethereum to another blockchain.
Single slot penality really helps a lot with faster bridging
and faster settlement times. What are your thoughts on the
comparison of this versus the Solana consensus which is you
know this proof of history BFT like consensus I?
(29:39):
Think Solana has Tower BFT whichis not single slot finality.
I think that the some of the benefits right now of tower BFT
are support for the higher number of validators
participating in consensus. I think they have somewhere
(29:59):
between 2 and 3000 validators right now, which is actually
quite impressive. Like I know that the the meme on
Twitter at least in the a coupleyears ago, like people would
always say that Solana is reallycentralized.
And there are some aspects in which I would say that that that
could be true. For example, the high RAM
(30:20):
requirements of running a Solananode.
But on the other hand, from a pure consensus perspective, it
is impressive that Solana has 2000 plus validators
participating in consensus. But that does come at a cost and
the cost is really finality is like not single slot it, it
(30:41):
takes some time. I think in practice they say
like after 32 slots you can consider it to be finalized.
But it it's a, it's a little bitprobabilistic even there as
well. I'm at 32 slots in Solana. 32 *
400 milliseconds is what is it like 12.8 seconds?
(31:03):
Yeah. And I mean one big difference
with sauna, I guess that's not consensus related.
No, but they said there's no Merkel proofs, No.
Yeah, Yeah. That's another example where
Solana removed the merkelization, which I think has
an impact on bridging, has an impact on the ability to run a
(31:25):
light client. In Solana, they say that the
light client is like a node thatsubmits a transaction, that
creates a yeah, transaction thatgenerates a proof, but it
actually has to be included in the blockchain in order to
generate like that that proofs. It's kind of it's not really a
light client in the same way that when people typically talk
(31:49):
about light clients. And from the light client
perspective, those are. How do the light clients work
for Monad? So I think on on day one we will
not have and the light client implemented, but it it
definitely could be implemented in the future with the consensus
(32:12):
mechanism that exists. And yeah, there's there's no
impediment per SE because there is, there is a Merkel route and
there is like all the other things that you need.
OK, cool. So that was consensus.
And then let's go to the last one.
Monad implements something that we call asynchronous execution
(32:38):
and the way to understand that is for sort of best understood
by sort of just want to mention some numbers here.
So Etherium has 12 second block times, but the actual rough time
budget for execution is only about 100 milliseconds, which
(32:58):
is, you know, if you do the mathlike less than 1% of the block
time. And so that's, that's really
interesting that the budget for execution is so small, like such
a small fraction of the block time in Ethereum or another
block chains. And the reason for this is the
fact that execution and consensus are interleaved with
each other. So typically in block chains,
(33:21):
the you know what'll end up happening is the leader ends up
choosing a list of transactions and then executing all of them,
generating the Merkel root of the resultant state tree and
then sending that as a block proposal to everyone else, then
everyone else. Can you repeat that?
Can you repeat that? How they're interleaved?
(33:42):
Yeah, so in most block chains, execution and consensus are
interleaved. Execution is the single node
problem of, you know, given a list of transactions, what's the
end state? And then consensus is a
distributed systems problem of nodes talking to each other over
a network. The nodes, if they're globally
(34:03):
distributed, which they should be, then that means round the
world communication, which can take hundreds of milliseconds.
There might be multiple rounds of communication.
So I think All in all, what you can see is that consensus ends
up taking the vast majority of the block time and execution
ends up being squeezed into a very small fraction of that
(34:24):
block time because of the interleavedness of execution and
consensus. Yeah, yeah.
But for example, that's something where Solana is also
much more because of their system, there's much more
execution, right? Right.
So with Solana, they're also exploring the idea of
(34:46):
asynchronous execution. Tolle has written about this
couple times on Twitter, which has been really interesting for
us to see. But yeah, Monad has been, you
know, since day one, asynchronous execution has been
a big part of the overall design.
(35:06):
And asynchronous execution meansdecoupling those two problems of
consensus and execution from each other.
Asynchronous execution is the idea of moving execution out of
the hot path of consensus into Iguess what I would call a
separate swim lane that is slightly lagging consensus.
(35:27):
So it what what ends up happening is as soon as the
nodes in the network come to consensus about an official
ordering of transactions, then two things happen in parallel.
One is they can start consensus in over the next block.
And the other thing is they can all each independently execute
(35:48):
that list of transactions that they've just agreed upon.
And so when you move the execution out of the hot path of
consensus into the separate swimlane, you can massively raise
the budget of execution and use the full block time as opposed
to only a small fraction of it. So first the network agrees on
just the order of all the transactions and how they're
(36:10):
being executed. We just we just consensus on
that and then the execution happens.
That's right, yeah. OK, OK, very interesting.
So that also means, for example,if you're now a proposer or like
(36:33):
you don't really have like there's no nothing really you
can no discretion you can apply I guess.
That's a, that's a good question.
So as a proposer you still can apply discretion and we think
that in practice the proposers definitely would apply
(36:54):
discretion because that allows them to, you know, more
optimally choose the set of transactions that that will end
up paying fees. You were asking earlier about
how MEV will end up working and we do think that there will
probably be some sort of mechanism for network
(37:20):
participants to submit ordering preferences to the block
proposer in some way and attach a tip alongside that.
So then that tip revenue is additional revenue for the
proposer of. Course the proposers are the
proposers are the ones that comeup with the ordering of
transactions. Correct.
(37:42):
And and then that gets consensuson and then the execution part
is kind of that is basically predefined.
Correct. That's exactly right.
So I think you said it better than than I did the first time,
which is when you think about a blockchain.
A blockchain is really just a bunch of blocks that are in
(38:04):
sequential order and then in each block, a list of
transactions which are also in sequential order.
So if you sort of like let the blocks themselves fade into into
the distance for a second, you just literally have like a long,
long, long list of transactions that are all canonically
ordered. And if, if everyone or, like
(38:29):
Brian, if you and I are both running nodes and we both have
exactly the same list of transactions starting from
Genesis, then we should have thesame exact state of the world
because we're both applying the same transactions, 1 by 1 by 1
by 1 by 1, and each time making the exact same state transition
and getting the same state of the world, the same Merkel root.
(38:52):
So the ordering of the transactions purely determines
the execution. It purely determines, like, what
the correct state is. Yeah.
Basically. Like the only reason why we have
Merkel roots is so that we can check each other's work and make
sure that, you know, neither of our computer is like got hit by
cosmic rays and and made a computational error.
(39:13):
It's like just to make sure thatwe're we're doing the same
thing. OK, so, but it also means now if
I'm the proposer, I get all these transactions, and now I can come up with a list, like an order of these transactions. So then there is potentially a lot of value, right, in changing this order, you know, accepting
(39:39):
different orders, putting in your own transactions, like all that kind of stuff.
Yeah, that's right. And this is really just true of any blockchain, but yeah, almost every blockchain is leader-based.
(39:59):
So there's a rotating leader, and then when it's your turn as the leader, you sort of have the privilege of being able to choose from various pending transactions that are in the mempool and assembling the next block, i.e. assembling the next ordering of the next set of transactions
(40:20):
that get enshrined into the history of the blockchain.
Of course, you need to choose valid transactions, so that everyone else verifies them and accepts them and votes for your block. But you have discretion over what the ordering is. And what I'm saying is that that choice about the ordering is all determined during consensus.
(40:42):
So you as the block proposer, you choose an ordering, you send it to everyone, everyone looks at it, checks that all the transactions are valid, does some other validity checks, which I can talk about in a second, and then votes yes. And then once, you know, a supermajority of the network, like a supermajority of stake weight, has voted yes,
(41:02):
that list of transactions is now enshrined in the history of the blockchain. But that can be done actually before execution of that list of transactions happens. And so that's exactly what's happening in asynchronous execution: having the nodes all agree on the official ordering. And then once they've agreed upon it, they can do two things in parallel, which are,
(41:23):
like, start working on consensus on the next list of transactions, but then in parallel go execute the things they all just agreed upon.
So I'm curious now, do you think that where this is going is that we're also going to have a sort of proposer-builder separation?
(41:43):
Where as a proposer, I'm going to basically, you know, plug into some builder or some other entity that applies a lot of intelligence to basically give me an order that produces the highest value, I mean, highest value for that builder then.
(42:04):
Because presumably there's a lot of value, right, in determining that order. Is that where you expect things to go?
Yeah, I think so. So the question of how the ordering is chosen is sort of orthogonal to all the other
(42:26):
stuff that I mentioned. Like, in the story I was telling you about, the proposer just chooses an ordering and then messages it to everyone. They all agree upon it, now the order is enshrined, and now everyone can go execute in parallel to consensus on the next block. That entire story just kind of black-boxed the decision of the
(42:48):
proposer, of how he or she chose that ordering. And to your point just now, that decision could be made by outsourcing it to, like, a third-party network. Like a PBS-type thing, as in
(43:11):
Ethereum, where there's a system for people to be able to submit bundles to the builders, have the builders build a block, have the block be submitted to a relay, which conducts a private auction among different builders to choose the ordering that offers the best overall amount of revenue, before
(43:33):
presenting the best option to the proposer and having the proposer choose that. All of that could still kind of work the same way. I don't know. It's sort of out of protocol, so we'll see how it develops over time. But yeah, I think you could think of the ordering choice as
(43:56):
being orthogonal to the other considerations of how consensus
and execution end up working.
But, like, by default when Monad launches, you know, proposer and builder would basically be the same? And so it's not like an enshrined proposer-builder separation, and then someone may come and say, hey, we are going to create a modified client that
(44:19):
separates out the proposer, or plugs in some kind of builder mechanism?
That's right. Yeah. The default mechanism for block building is a priority gas auction: choosing the ordering of transactions based on descending gas bid.
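As a minimal illustration of that default, here is a toy priority gas auction: order the pending transactions by gas price, descending. The field names are invented for the example, and real block building also has to respect gas limits, nonces and validity checks:

```python
# Toy priority gas auction: sort the mempool by gas price, descending.
mempool = [
    {"tx": "swap", "gas_price_gwei": 40},
    {"tx": "transfer", "gas_price_gwei": 15},
    {"tx": "mint", "gas_price_gwei": 90},
]

block = sorted(mempool, key=lambda t: t["gas_price_gwei"], reverse=True)
print([t["tx"] for t in block])  # ['mint', 'swap', 'transfer']
```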
(44:41):
I know, like, I think PBS is something that's fairly controversial. I actually remember we did interview Vitalik at EthCC and asked him the question: do you feel it was the right decision? And, you know, he wasn't really sure if this PBS separation was the right decision.
(45:03):
Do you feel like this kind of separation is a desirable end state? Because in the end, right, there is potentially a lot of value that ends up being extracted. I mean, in Ethereum's case, right, we now have, I think, really just two builders,
(45:25):
who are completely dominant. How do you want this ecosystem to evolve? Do you hope that there are going to be a lot of builders who compete, or do you sort of say, I don't really care, it's up to the market to figure this out? Like, how do you want to see this evolve?
Right.
(45:46):
Yeah, it's definitely a complex topic. I think that, you know, one thing that I hope we can accomplish through whatever block building mechanism becomes the dominant one: I hope that it involves a lot of
(46:08):
builders. Ideally it would be possible for the proposers themselves to just run the software and build an optimal block that way. It's much more decentralized in terms of the thing that you were mentioning: ultimately, the set of agents that are
(46:31):
sequencing transactions, we want that number to be as high as possible. I think in terms of the value capture in Ethereum right now, the proposers are capturing a lot of the value, because although there aren't that many entities that have a high market share in the builder
(46:53):
space, due to competitive forces they do have to bid up to give up most of the value that is in that block to the proposer. So it is still very beneficial to the proposer network. That is sort of revenue-maximizing for the proposers, which is then good for all token holders, because
(47:15):
anyone, you know, has the ability to stake and thus sort of get the returns of a proposer, or at least most of them. There's also the part about how, over time, we're seeing more value get captured by applications, through, yeah, mechanisms where the builder
(47:39):
needs to rebate some of the value to the application itself. And you know, when the revenue flows back to the application, that can be good, because either it makes the proposition of building an app more attractive, which I think a lot of us in the space would agree is a good thing to happen, because then it'll just encourage more
(48:04):
ambitious apps to get built. Or, potentially, the revenue, although originally going back to the app, could then flow back to either LPs of that particular DeFi protocol, or maybe a rebate to the taker, to the end user. So I think those are all different things that can evolve
(48:28):
over time. But yeah, to your original question: I think it is objectively better for there to be more actors that are ultimately choosing the ordering of transactions, for the purposes of censorship resistance and decentralization.
OK, cool.
(48:50):
Fantastic. So now I think we're coming to the fourth thing you mentioned.
Oh, actually, I think I've already gone all the way through the plan, because the first two things were both execution-related, then the third one is consensus, and then the fourth was the separation between consensus and
(49:13):
execution.
Oh, the asynchronous execution. Yeah, yeah. OK, yeah: optimistic parallel execution, the new database, the consensus, and then asynchronous execution. Cool. So where do we end up with this? Like, what is the kind of throughput that is achievable here?
(49:37):
Yeah, I think the cool thing about these different technologies that we're introducing is that they all stack on top of each other. It's like, you know, it's always exciting when you get a bunch of coupons in the mail, and one is like 50% off, one is 25% off, but they actually stack on top of each other, 'cause then you really
(49:58):
have a magnifying effect. So as I was mentioning before, asynchronous execution on its own is a massive unlock, because in existing interleaved systems only a very small fraction of the block time can be used for execution. Whereas by decoupling the two, basically the full block time
(50:22):
in expectation could be allocated to execution. So we can pack in the full block time as execution time, as opposed to only a small fraction. And then if, in addition to that, you have parallel execution, we can do a lot more work
(50:44):
in parallel. And then if we also have a more performant state database, we can respond much more efficiently to all these SLOAD and SSTORE requests that are happening in parallel. And if we have a more performant consensus mechanism that can keep up with this entire system of execution that's fast, then we can really, really see
(51:04):
massive gains. The other analogy I want to mention is that asynchronous execution reminds me a lot of the movie Limitless, which is sort of based on the premise that you only use 10% of your brain. What if you could use 100% of your brain? Like, imagine how superpowered
(51:26):
you would be. And of course, that's a little bit misleading, 'cause I think that 10%, like, the denominator has a lot of gray matter, like supportive tissue or something in there. So it's not really true that you can go from 10% to 100%. But it's a nice fantasy to think about. And in this movie, there's this
(51:48):
guy who takes a pill that allows him to use 100% of his brain, and he's just immediately superpowered and he's doing awesome. And then of course there's a narrative arc where, you know, by being so powerful he gets in a lot of trouble, and then he's able to somehow work his way out of it. So asynchronous execution is like that, where you're going from using only a small portion of the block time to using the
(52:11):
full block time. And then when you stack this on top of these other improvements, we can really get significant, significant throughput.
So now, back to the question you were actually asking: Monad is supporting over 10k TPS replaying existing Ethereum history.
(52:33):
So this is not 10,000 simple transfers or 10,000 ERC-20 transfers. This is 10,000 real transactions per second, from the distribution of recent Ethereum blocks, which ends up being about a billion transactions per day, or about 1 billion gas per second. Which actually is kind of funny: now on Twitter, a lot of people are talking about this goal
(52:53):
of gigagas throughput, or 1 billion gas per second, which is the throughput that we're able to achieve on Monad by stacking these different technology improvements, while running on reasonable hardware requirements.
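A quick sanity check on those quoted figures. The ~100k-gas average per transaction is my illustrative assumption, roughly in line with typical Ethereum transactions; the rest follows arithmetically:

```python
# Back-of-the-envelope check on the quoted throughput numbers.
tps = 10_000                 # claimed sustained transactions/second
seconds_per_day = 86_400
avg_gas_per_tx = 100_000     # assumption: rough Ethereum-like average

print(f"{tps * seconds_per_day:,} tx/day")         # 864,000,000 ~ 1B/day
print(f"{tps * avg_gas_per_tx / 1e9:.1f} Ggas/s")  # ~1.0 billion gas/s
```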
Which I think is another thing to point out, because a lot of the goals of getting to gigagas throughput are assuming that
(53:16):
you're running, you know, a centralized L2 where there's one server, and that one server has like a massive amount of RAM. And, you know, it's kind of a supercomputer that doesn't really work with the principles of decentralization. We want to have a fully decentralized network where the nodes are literally globally distributed, with the full
(53:39):
overhead of consensus. You want anyone to be able to run a node. It shouldn't be expensive to run one of these nodes. So you can get to a gigagas of throughput, but you have to have, like, a terabyte of RAM or more. People are talking about having
(53:59):
machines that have 10 terabytes of RAM, just crazy, crazy machines, in order to get that throughput. But with Monad we're getting that throughput while using commodity hardware, like a server that costs $1,000 a year to run.
OK, OK.
And then, so can you just contrast this with Solana today?
(54:22):
How does it compare?
I think Solana is processing somewhere between 2,000 and 3,000 transactions per second of real, non-voting transactions, whereas Monad is supporting over 10,000 transactions per second.
(54:43):
So it's like, you know, 3 to 5x the throughput right now. And then also, Solana has relatively high hardware requirements. I believe the requirement is 256 gigs of RAM right now, whereas Monad requires nodes to have 32 gigs of RAM.
How do you imagine this is going
(55:06):
to evolve in the future? Do you think you can scale that a lot more, or are you hitting the limits here with these improvements?
I think there's the capacity for another two to 5x of throughput improvements.
(55:26):
The real constraint is on the networking side. So Monad wants nodes to have 100-megabit bandwidth, and, you know, up to a certain point the throughput is sort of linear, on the consensus side, with the amount of bandwidth that's
(55:49):
allocated. But we don't think it's reasonable for a decentralized network to require gigabit bandwidth, for example. There's just sort of a physical limitation to the overall throughput that's imposed by the networking.
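A rough feel for why bandwidth bounds throughput. The per-transaction size here is my illustrative assumption, and real consensus adds significant messaging and encoding overhead on top, so the actual ceiling is well below this raw figure:

```python
# Rough ceiling implied by a 100 Mbit/s node requirement (illustrative;
# the ~200-byte average transaction size is an assumption, and protocol
# overhead reduces the usable fraction considerably in practice).
bandwidth_bits = 100_000_000   # 100 Mbit/s
avg_tx_bytes = 200

raw_tx_per_sec = bandwidth_bits / 8 / avg_tx_bytes
print(f"~{raw_tx_per_sec:,.0f} tx/s of raw broadcast capacity")  # ~62,500
```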
(56:10):
OK, OK. So basically bandwidth becomes the main bottleneck. And then that's something where, OK, if you go up with bandwidth requirements, it just means that you have to start compromising decentralization to some extent, or to another extent it just becomes harder to run a node. I mean, I think in Solana's case, right, that
(56:33):
is a real challenge. I mean, we've had various times in the past where, you know, data centres would basically say, hey, it's getting DDoSed, because it's getting so much traffic. So finding data centers that support, like, a Solana validator is not so easy.
Right. Yeah, I
(56:55):
think the most important thing to focus on is the fact that for Solana or Monad, there is a, you know, commitment to having a fully geographically decentralized validator set, having hundreds or thousands of nodes participating in consensus.
(57:18):
A lot of the more recent narrative has been around very, very centralized setups where there's a single sequencer. In that situation, there's no consensus at all. And if there's no consensus, then that whole thing about the bandwidth doesn't become a
(57:40):
consideration, because the node doesn't have to talk to anyone else. It literally is just the one super-node that all the requests are flowing to, and all of the RPC calls are going to it, or maybe going to, like, a slave or something, but, you know, something in the same data center.
(58:00):
So I think that's the single biggest thing. And that is ultimately what the constraint will be: the networking limitations to support a fully geographically distributed, decentralized network. But I think it is very important.
And then I think from that point onward, Monad would perhaps
(58:22):
pursue a horizontal scaling strategy, where there are several instances of Monad that are each very computationally dense, like each delivering over 10k TPS of throughput. And then, you know, when that 10k gets saturated by whatever set of apps are there, then maybe there's another Monad instance that's also highly
(58:45):
decentralized, has nodes all around the world, that has a more specific, like, different focus as well.
Right. It becomes sort of a sharding-like design. But I mean, fortunately 10k TPS is quite a bit, right? So that should probably buy some time.
Yeah, you want to shard when you've
(59:09):
reached the limits of what, you know, the hardware can really support. Like, I think it doesn't make sense to shard if each network could only support 100 TPS and you'd need to have 100 shards in order to get to 10k TPS. That's not very good. But if one shard itself is 10k TPS and then you set up 100 to
(59:29):
get to 1,000,000 TPS, that seems much more justifiable to me.
Yeah. And I mean, it sounds like previously you were sort of referring a bit to, like, MegaETH, right, which is the other project that's trying to super-scale the EVM, but doing it with this kind of supercomputer, centralized approach. To me, it seems very obvious
(59:52):
that's a massive compromise on, in the end, what we're trying to do here, right, having decentralized networks. But, I mean, do you think, let's say, MegaETH will, at the expense of decentralization, probably be able to process even much more than Monad? Like, what are your thoughts on
(01:00:15):
MegaETH as a comparison?
Yeah, I think there are a number of different projects that are trying to build high-performance L2s, taking advantage of the fact that there's no consensus overhead, and, you know, you can run that one node on a really big
(01:00:37):
box. I think another example is RISE, the L2, which is also doing this. I think Reddio is also doing this. I mean, you could argue that Hyperliquid itself is also kind of doing this, because the nodes are all in Tokyo. Like, in order to run a node, you kind
(01:00:58):
of have to have it in this one geography, with high RAM requirements on the nodes. So yeah, I think there are a number of different projects that are all trying to trade decentralization for performance. And I mean, if you think about it, your expectation should just be that there could be more performance if you cut out
(01:01:22):
all of the decentralization aspects and all of the overhead of consensus and so on. I think Mert actually has a pretty funny take on this, where he talks about how the L2s really should have, like, a lot higher TPS than Solana, because they're all running single centralized sequencers and you can add all
(01:01:45):
of their throughputs together. The fact that right now Solana has higher throughput than all the L2s combined, when the L2s have this massive advantage of centralization, is actually pretty crazy.
So yeah, they're different designs. There are definitely some trade-offs being made, but the goal of Monad is to offer really high performance while
(01:02:10):
having a very high degree of decentralization as well, at the layer one level. And we think that that's quite important. That's, you know, why we're all here: to help build new technology that helps decentralization have a greater impact, and decentralized apps have a greater impact.
So I'm curious about the developer experience here. Is it basically just going to be,
(01:02:32):
hey, I'm developing just like for Ethereum and pretty much the same, but the difference is just faster block times, more throughput, cheaper transactions? Or are there differences when it comes to developer experience as well?
On day one of mainnet, the focus is really just pure EVM equivalence, so that you can
(01:02:56):
take any application built for Ethereum and redeploy it on Monad without any changes. You don't even need to recompile
it. We are thinking about some additional quality-of-life improvements for developers. Those are things like raising the bytecode size limit
(01:03:16):
from 24 kilobytes to 48, or maybe even more than that. It's things like adding support for new precompiles, for example. There are a number of different cryptographic functions that frequently come up that currently have to get implemented in Solidity and are
(01:03:37):
very expensive to do so. But if there's a native implementation, then that should just kind of be baked into the node code, and you should just be able to call it with a precompile.
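Conceptually, a precompile is just a reserved address that the node intercepts and runs natively instead of interpreting EVM bytecode. A toy sketch of that dispatch follows; the dispatch code is invented for the example, though 0x02 really is Ethereum's SHA-256 precompile address:

```python
# Toy sketch of precompile dispatch inside a node: calls to reserved
# addresses run native code instead of going through the interpreter.
import hashlib

PRECOMPILES = {
    0x02: lambda data: hashlib.sha256(data).digest(),  # native SHA-256
}

def run_evm_interpreter(address, data):
    raise NotImplementedError("stand-in for ordinary EVM execution")

def call(address: int, data: bytes) -> bytes:
    if address in PRECOMPILES:
        return PRECOMPILES[address](data)      # fast native path
    return run_evm_interpreter(address, data)  # normal contract path

print(call(0x02, b"hello").hex())
```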
So we're working on some of those, and they will either be included in Monad mainnet or, if not right at mainnet, then at some point a little bit later
(01:03:57):
on. We're also looking at the account abstraction space quite a bit, and thinking about, well, first of all, following along with the existing EIPs that are contemplating different ways to ultimately get to native account abstraction. I think EIP-7702 is getting a
(01:04:18):
lot of traction. So this is the EIP that would sort of allow any EOA to effectively become a smart contract account as well, by having a pointer in its code slot to point to another smart contract, like another address
(01:04:39):
where code lives. So we're looking into that. And again, it'll either be included in mainnet or probably sometime shortly thereafter.
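To make the "pointer in the code slot" idea concrete: under EIP-7702, an EOA's code is set to a small delegation designator pointing at a contract address, and calls to the EOA execute that contract's code. A toy model follows; the designator prefix 0xef0100 matches the EIP, but everything else here is heavily simplified (the real mechanism involves a signed authorization in a new transaction type), and the addresses are hypothetical:

```python
# Toy model of EIP-7702 delegation: an EOA's code slot holds a small
# designator (0xef0100 ++ address) naming the contract whose code
# should run when the EOA is called.
DESIGNATOR_PREFIX = bytes.fromhex("ef0100")

code_slots = {}                                  # address -> code bytes
contracts = {"0xWALLET": "smart-wallet logic"}   # hypothetical target

def delegate(eoa, target):
    """Point the EOA's code slot at the target contract."""
    code_slots[eoa] = DESIGNATOR_PREFIX + target.encode()

def execute_call(addr):
    code = code_slots.get(addr, b"")
    if code.startswith(DESIGNATOR_PREFIX):
        target = code[len(DESIGNATOR_PREFIX):].decode()
        return f"running {contracts[target]} in {addr}'s context"
    return "plain EOA: no code to run"

delegate("0xEOA", "0xWALLET")
print(execute_call("0xEOA"))
```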
But yeah, at the end of the day, from a developer experience standpoint, developers can expect a fully EVM-equivalent experience, so that they don't have to change anything.
(01:05:01):
But in addition, our team is evaluating ways to make life easier for developers, so that it's actually easier to develop on Monad than on Ethereum.
Cool. I'm curious about a very different topic. You know, there's a lot of competition around layer ones, and, you know, building community and sort of getting mindshare is not easy.
(01:05:22):
I think you guys have done a fantastic job there, and it feels like there's already a lot of interest in Monad, people building on Monad. What have you guys done on that front that has had the biggest impact?
Yeah. I think everything is very synergistic, in the sense that there are a lot of
(01:05:43):
people in the crypto space that are really passionate about the ideals of the space, and excited about seeing the success of new decentralized apps, and excited about trying new things and giving feedback. And also that, frankly, spend a lot of time on crypto Twitter every day
(01:06:05):
and follow along with a lot of the storylines. I think what our team has done well is just creating a welcoming home for many people that share these common interests. And especially during the, you know, bear market of 2022-2023, just creating this
(01:06:26):
welcoming environment, where everyone was kind of a little bit down in the dumps from all the negative news and sentiment and so on around crypto, but still very passionate. And a lot of people sort of ended up joining the Monad community and making a ton of friends and contributing in different ways. Which then really has a flywheel effect, where now we're at
(01:06:52):
the point where artists that are creating new art involving the Monanimals, which were created by members of the community, or just leaning into all the memes and jokes and fun, then have a massive audience for their art. Which then is just very encouraging to them and gives
(01:07:13):
them more of a reason to create more art, which then creates more enjoyable experiences for everyone else. Which is, you know, very, very positive-sum. And then, lastly, it's also very positive for builders who are building on Monad, because then they immediately get some moral support from people who are cheering for them to be
(01:07:35):
successful and trying out the beta versions of their products and giving feedback and becoming community members. And yeah, I just think it's really about the fact that, for various reasons, the community has attracted a lot of people who are very long-term oriented and care a lot about the space and have enjoyed
(01:07:59):
making friends, and then getting to go to meetups around the world to meet up with their friends that they've been hanging out with a lot in person, in real life.
One last anecdote that I'll mention: around Devcon, we hosted the Monad Madness pitch competition. This is the second iteration of something that we're hoping will
(01:08:21):
become very much a fixture of what we do. It was a competition for 25 teams to present in front of a panel of, you know, really leading investors in the space. We had people from Paradigm, Electric Capital, Pantera, Animoca, IOSG; anyway, those were the judges, and
(01:08:47):
then a bunch of other investors were in attendance in the audience. So these teams got to go on a giant stage, pitch their project, compete for $500,000 worth of prizes, and get the attention of judges so that
(01:09:07):
they could attract money in their next fundraise. And then we also got over 400 people to attend in person, and 1,000 people attending on the livestream. And then kind of around this, we had over 150 people fly in from other countries, mostly in the Asia area, but we had people traveling from as far away as Greece and Turkey and so on, all
(01:09:30):
the way flying to Thailand, just to go attend the community meetup and spend the week hanging out with their friends that they've met online. So yeah, I think, in short, there's just an incredible amount of energy and excitement in the Monad community. And that has translated into benefits for builders as well, which then makes the ecosystem stronger, which then pulls in more people, and
(01:09:53):
hopefully it pulls in a lot of people that have never known anything about crypto in the past as well.
Then maybe a final question. So what are the timelines here, like, when do you expect mainnet to launch?
We're expecting mainnet to launch sometime early next year. No exact date at the very
(01:10:15):
moment, but the team is working really hard on this, and in particular we're expecting to launch the testnet imminently, in 2025.
Cool. Well, thank you so much, Keone. That was really great. I really enjoyed getting into the details here. And I feel like there are a lot of very smart and interesting
(01:10:39):
and reasonable decisions that you guys have made, which I think can end up in a really powerful blockchain. So thank you so much for coming on. It was a great pleasure.
Yeah, thank you for having me, Brian. It's really nice chatting with you.