Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Chainlink, the biggest player in the oracle market, is
naturally positive for the growth of the ecosystem, but
it's definitely not supporting all the use cases we wanted to
create. Delivering data to blockchain
ecosystems and dapps is a very, very broad term.
The most important thing is to create a system that is scalable and
also sustainable long term. The biggest problem of oracles
(00:23):
at that time was the gas cost on Ethereum and also the
cost of adding a new feed. So we tried to create a model
where both of those challenges are solved.
That's the reason we started Redstone out of the Arweave
blockchain, the storage chain, incubation program with the
promise of creating a more modular approach that can be tweaked and
(00:45):
upgraded as progress in technology is achieved.
Welcome to Epicenter, a show which talks about the
technologies, projects and people driving decentralization and
the blockchain revolution. I'm Brian Crain and today I'm
speaking with Marcin, who's the co-founder of Redstone.
Redstone is an oracle project. They've been getting a lot of
(01:06):
traction recently. So I'm excited to dive in with
Marcin on Redstone and oracles in general.
And just before we dive in with Marcin, I would like to share
a few words about our sponsors this week.
If you're looking to stake your crypto with confidence, look no
(01:26):
further than Chorus One. More than 150,000 delegators,
including institutions like BitGo, Pantera Capital and Ledger,
trust Chorus One with their assets.
They support over 50 blockchains and are leaders in
governance on networks like Cosmos, ensuring your stake is
responsibly managed. Thanks to their advanced MEV
research, you can also enjoy the highest staking rewards.
(01:47):
You can stake directly from your preferred wallet, set up a white
label node, restake your assets on EigenLayer or Symbiotic, or use
the SDK for multi-chain staking in your app.
Learn more at chorus.one and start staking today.
This episode is proudly brought to you by Gnosis, a collective
dedicated to advancing a decentralized future.
(02:07):
Gnosis leads innovation with Circles, Gnosis Pay and Metri,
reshaping open banking and money.
With Hashi and Gnosis VPN, they're building a more
resilient, privacy-focused Internet.
If you're looking for an L1 to launch your project, Gnosis Chain
offers the same development environment as Ethereum with
lower transaction fees. It's supported by over 200,000
(02:31):
validators, making Gnosis Chain a reliable and credibly
neutral foundation for your applications.
GnosisDAO drives Gnosis governance, where every voice
matters. Join the Gnosis community in the
GnosisDAO forum today. Deploy on the EVM-compatible
Gnosis Chain or secure the network with just one GNO and
(02:53):
affordable hardware. Start your decentralization
journey today at gnosis.io. Thanks so much for coming on,
Marcin. It's really great to have you
on. Hey everyone, thank you for
having me. I'm excited.
Epicenter is OG and I'm super happy to be over here.
Yeah, absolutely. So let's start at the beginning.
How did you get into crypto? All right, it's an interesting one.
(03:16):
So in 2017 I was writing my bachelor thesis about blockchain
and Bitcoin, and then I got all into the weeds of, you know, early ICO
white papers and how the ecosystem was structured.
Back then I joined one of the start-ups, called the Polish
Accelerator of Blockchain Technology,
because I live in Warsaw, Poland.
(03:36):
I was working on a project to create something that nowadays
resembles Chainalysis or Elliptic.
So we were working with the Polish national police to track
ransomware attacks that asked the victims to send bitcoins to
a specific wallet. And the Polish police at that
time, in 2017-2018, had zero tooling to track such a criminal,
(04:00):
like a crime person. So basically I was
coordinating such a product back then, but the start-up went
bust; I didn't get three months of salary.
So that was a hard start. Then I went to work in a couple
of start-ups and at Google Cloud, and in 2020, together with Jakub
Wojciechowski, the technical founder of Redstone, we kicked
off Redstone. So that's how I got here.
(04:23):
And so how did you decide to focus on oracles?
What was the reason to focus on this particular
area? Well, oracles as a category are
super interesting, in the sense that delivering data to
blockchain ecosystems and dapps is a very, very broad term.
The most important thing is to create a system that is scalable and
(04:45):
also sustainable long term, especially as crypto
technologies are evolving all the time. The innovation cycle
is probably, I don't know, three, maybe six months, so you
have to keep upgrading and be on the lookout for the most
effective technologies to implement.
And in late 2020, early 2021, we acknowledged that Chainlink,
(05:06):
the biggest player on the oracle market, is naturally
positive for the growth of the ecosystem, but it's definitely
not supporting all the use cases we wanted to create.
For example, when we asked them about some price feeds around
interest rates at the time, or something that is more
sophisticated than the blue chips, the queue was super long.
(05:28):
Like, we were waiting a couple of months to get something simple.
That made us realize the biggest problem of oracles at that time
was the gas cost on Ethereum and also the
cost of adding a new feed. So we tried to create a model
where both of those challenges are solved.
That's the reason we started Redstone out of the Arweave
(05:48):
blockchain, the storage chain, incubation program with the
promise of creating a more modular approach that can be tweaked and
upgraded as progress in technology is achieved.
So whenever there is something new that can improve
the technology, we don't have to recreate the whole end-to-end
pipeline like Chainlink has nowadays.
(06:09):
And so far it's been going pretty good.
Right. So basically with Chainlink, you
saw the biggest issue as basically sort of the time and
flexibility of adding different feeds, supporting different types
of data, maybe different chains, and the gas cost of oracles.
(06:29):
I mean, we've been running, at Chorus One, we've been running
Chainlink for many years and everything.
It's gotten a lot better now.
But at the time, I think the amount of gas it was costing was
absolutely bananas. So I definitely can relate
to this challenge back then.
(06:49):
Aside from Chainlink, how else did the oracle landscape look
back then? That's an interesting question.
So 2017-2018 was naturally the period of ICOs.
So some people just gathered quite interesting amounts of
money to create some projects and protocols that ended up not
working extremely well. Some oracles I remember from
(07:12):
back then are Tellor, or Band, and I think there was also
Oraclize or something like this.
The majority of those didn't get traction, and there are a couple
of reasons for that. The major one is oracles have a
kind of economies of scale. So the more protocols are using
you, the easier it is to get new ones to join the network and
share the cost. So when you have such a
(07:34):
dominant player as Chainlink back then, and also the market in 2020
and 2021 was a bit smaller. It was expanding, but it was
still fairly niche, so there was probably not enough
space for many oracles that were fairly similar to each
other. I would say many of them were
just trying to copycat Chainlink and then try to do it
cheaper, right. So not much of an innovation.
(07:58):
So in 2021, Chainlink was the only dominant player.
Right now, I would say it's not a monopoly anymore.
In 2024 and 2025, the top three players right
now are Chainlink; Pyth, which started on
Solana and then went cross-chain with the Wormhole bridge and is
trying to innovate as well; and Redstone is in the top
(08:19):
three oracles when we talk about number of clients.
So right now we have over 130 clients, and when it comes to
number of chains supported, right now we support over 70
chains in both push and pull models. And with push
specifically, the Chainlink-style one, we support
right now over 30 chains, already more than Chainlink.
(08:40):
So the ecosystem is evolving, and I would say, as crypto right now
is truly getting proper acceleration,
this nimbleness of an oracle is going to be super crucial.
So let's talk a little bit about, you know, how does
Redstone work? Like what's the technical
(09:00):
architecture and what does the solution look like?
OK. That's a good question.
We created four modules in the oracle, and three out of four
modules can be used on all the networks we expand to.
So we don't have to redeploy the whole node architecture on every
single chain we try to launch on, as Chainlink does.
(09:22):
That's the reason, for them, the cost of going to a new
ecosystem is very high, and they need to create enormous
contracts with some of those chains; we've heard stories of them
paying not even millions, but dozens of millions of dollars
to get that. With Redstone, the modules are: the data
sourcing module, so we pull right now from over
(09:44):
180 sources; the node operators module, where data
providers run nodes sourcing the data from those sources
and then aggregating it; and the data aggregation module, which we call
the data distribution layer, an off-chain component that
you can think of as a decentralized cache.
(10:06):
So anyone can join it. There is no token-based
consensus, but it's an open architecture that everyone can
join with a gateway node and pass signed data packages from
those data providers. So it's available to anyone.
And early in the day, the biggest challenge was to create
a solution over here that is protected against DDoS.
(10:27):
So we did extensive work to make sure that stress tests are passed
and no one can actually spam the network.
So those signed data packages go to the third module, which is
this data distribution layer. And from there it's transparent,
it's public; anyone can pick up a signed data package, and it is
delivered to the on-chain environment with the data delivery
(10:49):
layer. And for EVM networks there is a
standard package which we call the EVM Connector.
So in the pull model that we have, every single EVM chain can
use Redstone almost right away, as long as it's EVM compatible
in fact, because some of the networks claim they are and then
in the nitty-gritty it turns out that's not necessarily the case.
(11:10):
And with the push model, we create a data delivery service,
like a pushing infrastructure, for every single ecosystem that
would like to also have the Chainlink-compatible interface.
And here, for example, we can give kudos to Chainlink for
standardizing the interface with the Chainlink AggregatorV3 that
people just like and use. And we also adopted it over here in
the push model itself. When we go to non-EVM
(11:34):
ecosystems, all the first three components, so
the sourcing, node operators and data aggregation, can be
reused. So all of that is kept off
chain, so it can be utilized on any
blockchain that we go to, and the data delivery layer, this EVM
connector, can be adjusted to a TON connector,
for example. We were the first oracle to launch on the TON network;
(11:55):
a Starknet connector, a Fuel connector, a Solana
connector, and so on and so forth.
So the piece that we have to adjust for a new ecosystem is
way smaller in comparison to Chainlink redeploying the
whole architecture. OK, so let's go into this a
little bit in detail. So you say first of all the data
(12:16):
sourcing module, how does that work?
So each data provider operating a
Redstone node can choose one of two paths: either adopt a
library that we prepared and utilize all the data sources
that we identified, such as on-chain sources, for example Curve,
(12:37):
Uniswap, Balancer and so on, so many of the decentralized
exchanges. And this is super important for yield-bearing
assets, for example LSTs, LRTs,
Ethena and other yield-bearing
collateral, because those assets are not traded on centralized
exchanges; they are usually traded on decentralized
exchanges. So that sourcing is really
(12:58):
important. Historically, both Chainlink and
Pyth struggled a lot to call the on-chain source directly to
create a price feed. The second type of source is
the centralized exchanges, so regular Binance, Coinbase,
Bybit, OKX and so on. And the third type is the
aggregators, such as Kaiko, CoinGecko, CoinMarketCap and so
(13:23):
on, that aggregate data from many of the other sources.
The second option for the data provider node, so the node
operator, is to implement their own sourcing module.
So for example, one of the operators of a data node is
Kaiko, or Auros, or fairly soon Alchemy as well.
Those can implement their own pricing methodology if they have
(13:47):
one. And the reason we allow that is
we have not only redundancy on the level of the nodes, in that we
have many nodes, but you have also redundancy in the
methodology. So if our methodology that we
share with the data providers has a glitch for any reason, all
of them will follow that glitch. But the guys that come with
their own sourcing methodology, there is a high chance, are not
(14:09):
going to replicate that. And this is one of the problems
with the node operators of Chainlink, because all of them are
following the same methodology.
So if there's a problem with the methodology, even though there
is redundancy in the number of nodes, the outcome is going to
be skewed because there's no redundancy in that form.
OK. And then how do you, I mean, I
(14:30):
imagine there can also be some issues with people just
implementing their own methodology, because, I don't
know, maybe it's flawed or maybe there's some sort of conflict of
interest. Is there some way you kind of manage that with
people? And then also, what's the
incentive for someone? Because I imagine it's quite a
(14:50):
bit of work to develop your own methodology here.
Well, the entities that come with their own methodology
usually don't do it specifically for Redstone.
It's rather that they have the methodology either way.
Kaiko, for example, is a company that creates a
methodology for pricing assets anyway and delivers that to
many of the traditional markets. We also work with market makers
(15:15):
that have all the data about people trading on the various
venues. So it's usually more like, if a
data provider comes in and they already have the methodology,
they're comfortable utilizing that to add redundancy to the
flow. As for incentivization right now:
with early data providers, we give out grants and, in the future,
(15:36):
Redstone tokens, so that they operate the node itself in a,
let's say, redundant way. Since the beginning of the
network, we have had zero downtime issues or even price
skewing issues, because we run an autonomous checker that verifies
whether the data delivered is far outside the boundary of
(15:59):
other providers. So if there is a skew in,
for example, the BTC price, the checker is going to penalize
that data provider. Once the Redstone token is launched, and that
should happen soon, we are preparing for that,
preparing the new version of the white paper and all the
necessary steps to further decentralize the network.
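The autonomous checker described here could be sketched roughly as below; the function name, the report format, and the 2% boundary are illustrative assumptions, not Redstone's actual parameters:

```python
from statistics import median

def flag_outliers(reports: dict[str, float], max_deviation: float = 0.02) -> set[str]:
    """Flag providers whose reported price deviates from the median of all
    reports by more than max_deviation (fractional, e.g. 0.02 = 2%)."""
    ref = median(reports.values())
    return {
        provider
        for provider, price in reports.items()
        if abs(price - ref) / ref > max_deviation
    }

# One provider reporting a skewed BTC price gets flagged for penalization.
reports = {"nodeA": 100_000.0, "nodeB": 100_050.0, "nodeC": 91_000.0}
print(flag_outliers(reports))  # {'nodeC'}
```

Comparing each report against the median of all reports (rather than a single peer) keeps one bad provider from shifting the reference point.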
(16:20):
And there's going to be a staking contract where all the
data providers will need to stake Redstone tokens.
And the moment they have either downtime or huge
negligence in terms of the accuracy of the data,
they will be automatically slashed.
And also there's going to be a module for people to vote if the
(16:40):
price feed was outside of a reasonable boundary but still
not caught by the checker. So imagine there's a Bitcoin/USD
price feed and it traded at $100,000.
Is a report of $98,000 a wrong price?
Hard to tell, depending on the asset and depending on the
markets that are utilizing the price.
(17:00):
So for each single asset, from the blue chip ones on, we have a
specific boundary within which a price is still deemed
acceptable. But if anyone thinks that within
this boundary the accepted price was actually incorrect, they can
raise a dispute against a specific data provider:
hey, this was actually a fraudulent price, it shouldn't
(17:20):
be accepted as the correct one. And then token holders can vote
on whether that was true or not. OK, OK, so that's the
data sourcing module. And then the second module, you
said there was something about the node...
Node operator. So the thing I explained to you
(17:43):
were both of the modules.
So for data sourcing you can either use Redstone's or your own, as
you wish. And the data providers module is where they
aggregate the data from those sources and then deliver it to the
data aggregation module, which is the data distribution layer.
And if the package they deliver is skewed,
(18:04):
as mentioned, they're going to get slashed, the moment they have
the Redstone token used as collateral.
OK, so the data aggregation module then basically says, OK,
we have some kind of algorithm to say, OK, we are going to take
the median or some kind of statistical measure and remove
outliers. And is this happening on chain or off chain?
(18:28):
Off chain. The part that I explained over here is happening
off chain, but we are also introducing right now an
alternative module, the restaking one, on top of restaking
protocols like EigenLayer and Symbiotic.
We have it tested on EigenLayer, running right now, and we
are also exploring Symbiotic for that, to allow this aggregation
(18:49):
to also happen on the restaking nodes, so that the attesters
attest that the data was delivered and signed and is within this
boundary. So this flow that I
explained right now can also happen in the restaking flow
itself. OK.
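The off-chain aggregation step discussed here, taking a median-style statistic over the providers' reports and discarding outliers, might look roughly like this; the trimmed-median rule is an illustrative assumption, not Redstone's actual algorithm:

```python
from statistics import median

def aggregate(prices: list[float], trim: int = 1) -> float:
    """Drop the `trim` lowest and highest reports, then take the median
    of what remains (a simple trimmed median)."""
    if len(prices) <= 2 * trim:
        return median(prices)  # too few reports to trim safely
    s = sorted(prices)
    return median(s[trim:len(s) - trim])

# A single wild report does not move the aggregated value.
print(aggregate([3001.0, 3002.5, 2999.0, 3500.0, 3000.5]))  # 3001.0
```

The point of trimming before taking the median is that even an honest-looking but glitched report is removed before it can influence the published value.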
So then it will basically be that with this AVS, you'd
(19:10):
basically have some kind of checks happening on there to
verify the data that's then distributed. And is that on any
of the chains that are supported, or is it specifically
for data that's related to restaking?
It will be for the most used assets that people care about
(19:32):
because adding restaking just for restaking's sake, let's call it,
doesn't bring much value. Restaking costs:
this is something that people usually don't mention,
but running a restaking flow also requires giving out the
incentives and making sure the infrastructure is running.
So there is a cost associated. So there should also be a
(19:52):
gain in terms of the business and go-to-market, and we are not
going to run this module for the coins that people don't care
about, the long tail. Most likely it's going to be for
blue chips such as ETH or Bitcoin,
on the networks where people have a lot of TVL at
stake, so that they are sure that the restaking can also scale up
(20:12):
with the amount of TVL that we are protecting, for example on
Base, on Arbitrum, or even Ethereum mainnet.
OK, OK. So we talked about, so first
of all, we have the data sourcing, right, where there's
like a lot of different places where people pull the data.
They can either use your libraries or they can have some
sort of own methodology. And then the node operator
(20:37):
module, I mean, in Chainlink's case, right,
they basically, you know, choose operators for the
different feeds, and then, you know, you have a specific number
of, you know, operators that will report the data for this
particular feed for some particular network.
(21:00):
Do you have a similar model there, or do you guys choose
basically a bunch of operators to do this?
Or is it a more open system where kind of anyone can
come in? It's not open in the sense
that everyone can come in, because creating a truly open
system for data sourcing in oracles, in my opinion, is
(21:23):
impossible. The reason is that then you
can sophisticatedly create an attack on an oracle.
So there should be a sort of authority or reputation system.
And in our case, right now we are whitelisting the data providers.
So you have to apply as a data provider with Redstone.
Then we review the application, we put you on staging for at
(21:44):
least two months, and we analyse the quality of the data that
you deliver. So becoming a data provider is
not something that happens on the spot; it takes time. But we
are onboarding new data providers regularly to make sure
that the system keeps growing. And one thing that you said that
is super important over here is that all the node operators have
(22:04):
to choose which network they support.
So again, you can see the metrics of the complexity of
Chainlink growing: not only do they have many node
operators, not only do you have many assets they have to
subscribe to, but there are also many networks they have to
support. And we abstracted those away,
especially when it comes to the networks they support.
(22:25):
So they just have to declare which assets they support and
run the node. So the networks part is fully
abstracted away because of this data distribution layer, because
they send the data not on chain, but to this cache environment that
is off chain. So they don't have to
necessarily subscribe to a specific chain ecosystem.
(22:47):
OK, so basically the node operators, they provide
this data to this off-chain module, and then the data is
basically delivered to different
places. Exactly. So the same data, the same signed packages,
are transparent, are visible to anyone. The code is open source, so
anyone can check it out, and then those signed data packages are
(23:09):
delivered to the destination blockchain.
And imagine this package is delivered to, let's say,
Ethereum and Avalanche, the same
package to both of those
networks. The smart contract that is
receiving the package has a number of validated signatures
that are acceptable in the flow itself.
(23:30):
So if anyone tampered with the price package before it got to
the smart contract, the signature is not going to match
and the price package is going to be reverted.
As long as the signature matches on the destination
chain, then it can be utilized in the context of a smart
contract or a specific dapp. So thanks to that, we can
literally tap into the security of the destination chain with
(23:52):
this verification of the signature.
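The tamper-detection idea behind these signed data packages can be illustrated with a small sketch. A real deployment verifies ECDSA signatures inside the destination chain's smart contract; the HMAC below is only a stand-in for a signature scheme, and the key and package fields are assumptions:

```python
import hashlib
import hmac
import json

def sign_package(key: bytes, package: dict) -> bytes:
    """Sign a canonical serialization of the package so signer and
    verifier hash identical bytes."""
    payload = json.dumps(package, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_package(key: bytes, package: dict, signature: bytes) -> bool:
    """Accept the package only if the signature matches its contents."""
    return hmac.compare_digest(sign_package(key, package), signature)

key = b"provider-signing-key"  # stand-in for a provider's key pair
package = {"symbol": "BTC", "value": 100_000, "timestamp": 1_700_000_000}
signature = sign_package(key, package)

print(verify_package(key, package, signature))                  # True
print(verify_package(key, {**package, "value": 1}, signature))  # False: tampered
```

Any change to the package between the provider and the destination contract breaks the match, which is what lets the on-chain verification "tap into the security of the destination chain".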
But it means that, for example, the smart contract on Avalanche
or on Ethereum mainnet, it has to basically be updated to say, OK,
this is the signature I accept. So then you're basically
going to update this for each of the node operators
(24:16):
to say, OK, this node operator is a part of it,
so that's their signature.
And now the packets, or the data they sign, can be sort of
distributed to all these different chains and
verified natively there. Exactly.
And on the destination chain, because we support both
(24:38):
models, the pull oracle model and the push oracle model,
those are the two most popular ones in blockchain
ecosystems right now. And in the pull model, when it's more
dapp-specific (for example, Gearbox is using
that right now, and DeltaPrime and Curvance),
they can even specify which data providers they want to accept
(24:58):
and which ones they want to exclude.
So they can specify which of the whitelisted signatures they want
to still utilize. So imagine they don't like one
of the providers, or they just deem this provider could have a
shake-up. Imagine back in the day FTX and
the whole, let's say, crash. So when someone already smelled
that, hey, there's a chance it can collapse:
(25:20):
I mean, they put a lot of leverage on it, and I just don't
want to play that game because it doesn't add much to my
ecosystem; I can just use all the other
ones and not lose much. They would just exclude FTX as
the data provider and utilize all the other ones.
So we want to also give the builders a lot of flexibility.
And in general, our motto with Redstone is by builders, for
(25:42):
builders. So we always try to focus on the
engineers, because they are the ones that are integrating the
oracle in the end. Can you explain this pull model
and push model a bit more? Like, how do those two work?
Of course. So the push model is the one
that was most popularized by Chainlink.
(26:02):
So you take the packet and then you throw it on chain, to the on-chain
storage. And all the protocols that are
on a given network, let's call it Avalanche, can just read that
data, and the updates are happening based on the deviation
threshold and the heartbeat. For example, an ETH/USD price feed
(26:22):
on Avalanche or Ethereum mainnet can have a 0.5% deviation
threshold and a 24-hour heartbeat. So by design, the users of that
data accept that the data on chain is going to diverge from
the real price on centralized exchanges and all the sources by,
(26:43):
in theory, at most 0.5%.
But we also checked the historical performance of Chainlink,
and many times, due to the
consensus mechanism, it's even
more than half a percent. So by design, with the push
model, you accept it's going to be inaccurate, and then it's just
a question of how inaccurate you're going to go.
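The deviation-threshold-plus-heartbeat update rule of the push model can be sketched as follows; the function and parameter names are illustrative, with the 0.5% threshold and 24-hour heartbeat taken from the example above:

```python
def should_push(last_price: float, last_update: float,
                new_price: float, now: float,
                deviation: float = 0.005, heartbeat: float = 24 * 3600) -> bool:
    """Push a new on-chain update when the price has moved past the
    deviation threshold, or when the heartbeat interval has elapsed."""
    moved = abs(new_price - last_price) / last_price >= deviation
    stale = now - last_update >= heartbeat
    return moved or stale

print(should_push(3000.0, 0, 3020.0, 600))     # True: ~0.67% move
print(should_push(3000.0, 0, 3005.0, 600))     # False: within 0.5% and fresh
print(should_push(3000.0, 0, 3005.0, 90_000))  # True: heartbeat elapsed
```

Between updates, readers simply see the last pushed value, which is why the on-chain price is allowed to lag the real market by up to the threshold.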
(27:03):
And the reason for that is the gas cost.
As you mentioned, each update with the push model requires
the oracle to spend gas on the update itself, just for
maintenance. It can go as high as even
thousands of dollars for one update.
When the network gets crazy, with for example CryptoKitties
back in the day or even the EigenLayer airdrop campaign, and then
(27:27):
the gas was going very high, then all of those updates are
crazy expensive. The pull model,
in contrast, is more dapp-first,
while the push model is chain-first:
it delivers data to the chain and every dapp on the chain can use
it. The pull model is more
dapp-specific, for a specific project.
Take Gearbox as an example. Gearbox is using right now the
(27:50):
Redstone pull model in a way that they fetch a packet from
this data distribution layer, so an off-chain environment:
a signed data packet, whenever there's a transaction from the
user to use Gearbox. So imagine, Brian, you're using
Gearbox and you're signing a transaction in MetaMask or
Ledger, whatever. And then Gearbox, quickly, with
(28:10):
their front end, would fetch a signed data packet from
Redstone, take your transaction, and attach the signed data packet
to the calldata of your transaction.
Then your transaction is delivering it on chain.
On chain, in a smart contract, the signatures are verified as to whether
it comes from an accepted data provider.
If it does, then the transaction is executed.
(28:32):
So essentially, what is happening over here: you as a user cover
the gas. So there is a small margin we
are adding on top of the gas that you would pay.
So instead of paying, for example, $3 for a transaction,
you're paying $3.02. And we optimized the gas a lot,
so it's pretty marginal for the user. And also, you're updating
the price feeds with every user interaction, because what matters
(28:55):
for the dapps is not what the price on chain is.
What matters for the dapps is what the price is when the user
is taking an action or a liquidation is happening.
So one user interaction can be you creating a new position, as I
explained, but another user can just run a liquidation bot,
and the moment it sees a position that is underwater, they can
(29:18):
fetch the signed data package from the Redstone data distribution layer
and liquidate the position, together within a single
transaction. So for this Redstone
distribution layer, where you go and you fetch that
update, is that kind of like a central thing? Does one basically call
(29:39):
the Redstone API server, or how does that work?
So those are gateways. We as Redstone are running, I
believe, right now five gateways. So you can choose from them; you can
run your own gateway. We have people from our
community also running a gateway.
And for redundancy we are also utilizing some anti-DDoS
(30:02):
solutions, such as the Streamr network, to ensure those packets
are available for anyone to pick up.
So essentially you are just utilizing a specific gateway to
fetch that packet, whichever you like.
OK, OK. And so basically the gateway...
Now, if someone were to run this gateway, then basically by
(30:23):
default, if you have a node operator and you sign some
package, then you just sort of send it to all the gateways, and
they have some kind of peer-to-peer network exchange to
keep them updated. Exactly.
And imagine you're a protocol and you're afraid that, you know,
the gateways might blow up at one point or whatever.
You can run your own gateway, or you can run your own three
(30:44):
gateways. The cost of running one is fairly
small, because we also optimized to make sure the gateways are
lightweight. So there you can add as many
layers of redundancy as you like.
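The gateway redundancy described here amounts to simple client-side failover; the gateway URLs and the fetch_package helper below are hypothetical, purely to illustrate the pattern:

```python
def fetch_package(gateways, fetch):
    """Try each gateway in order; any single gateway can fail without
    breaking availability of the signed data packages."""
    errors = []
    for url in gateways:
        try:
            return fetch(url)
        except Exception as exc:
            errors.append((url, exc))
    raise RuntimeError(f"all gateways unavailable: {errors}")

# Simulated fetch: the first gateway is down, the second responds.
def fake_fetch(url):
    if url == "https://gw1.example":
        raise ConnectionError("gateway down")
    return {"symbol": "ETH", "value": 3000.0, "source": url}

pkg = fetch_package(["https://gw1.example", "https://gw2.example"], fake_fetch)
print(pkg["source"])  # https://gw2.example
```

Because the packages are signed by the data providers, it does not matter which gateway serves them; a malicious gateway can at worst withhold data, not forge it.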
Yeah, OK. No, that seems like an elegant
solution, where basically, OK, whenever someone
interacts with a protocol, the data is sort of supplied along with the
(31:07):
current state and then updated.
And then of course you basically have the latest data
whenever that happens. Exactly.
So in this data distribution layer right now we update the
price feeds every 3 seconds. So instead of getting 24 hour
heartbeat and a half percent deviation threshold, you get 3
(31:30):
second latency. And we are about to upgrade
those gateways to also offer fast gateways, so even faster.
So for people that do not care that much about decentralization,
and usually those are perpetual DEXes and highly
optimized solutions, there is always this trade-off. And you, Brian,
know it more than anyone else, probably: if you go faster and
(31:53):
more accurate, then the decentralization usually has to
be traded off; the more decentralized you are,
you also have to add some latency component to that.
So we are also right now publishing new gateways that
would allow for sub-second price delivery, so that for perp DEXes
the data is not too old. I see, there are obviously
(32:19):
advantages to this pull model. What are the biggest
trade-offs between the push and pull models?
What are some of the advantages of the push model,
for example? I would say the biggest
advantage of the push model is that it's already widely known,
so Chainlink did a good job, as mentioned, with the Chainlink V3
interface, so people are just familiar with that.
(32:41):
With the pull model, you have to do some code updates to make
sure your smart contracts are capable of receiving this signed
packet, extracting the data and checking the signatures.
So there is some need for updating the smart contract.
So what do we see? There is a very, I would say,
(33:04):
common path: the older protocols, so forks of
Aave V3, Compound V2, or any protocol that was designed
in 2020-2022, usually prefer the push
model, because they're already audited, they don't want to
change the code, and, you know, they want to be very much aligned
with the older infrastructure. But all the new protocols, they
(33:24):
try to optimize with the pull model, because they still have
some plasticity in the design that they are creating. And one
big beauty that people don't recognize in the pull model:
once you integrate the pull model from Redstone, you can utilize it
on any EVM network. So for example, if you're a
protocol that cares a lot about cross-chain availability and you
(33:45):
want to deploy to 10 or 50 networks quickly, once you
integrate it on one EVM chain, you can go to almost every EVM
chain, as long as the EVM compatibility holds end to
end. Yeah, yeah.
No, I can see that. So there's basically a little
bit more of, like, development overhead in the pull model, but
then you have, you know, you have the advantages that you
(34:09):
have faster updates. It's kind of cheaper, and
the cost is borne by the actual users who are calling these
contracts. And the cost sort of easily
scales too; you know, just wherever you deploy the smart
contract, any EVM chain, it will sort of just support it
(34:29):
out-of-the-box. Exactly.
And maybe one important aspect is all the price feeds that we
configured, in the sense of pairs, are available for anyone
in the pull model. So with the push model, on many
of the networks only like 5 or 10 price feeds are supported.
You can go to app.redstone.finance and check out the
(34:51):
push model portal. So you can see on some networks
we push 20 pairs, but on some we push only three pairs, right?
Because it's very much dependenton who is going to pay for the
gas of those updates. In the pull model, the gas is
not a problem anymore, because the user or a liquidator is
incentivized to pay for these additional payloads to execute the transaction.
(35:14):
Thanks to that, you can utilize
right now over 200 price feeds that are in production mode,
and we have over 1,200 assets in demo mode.
And whenever you need to bring one from demo to production, you can
just ask our team to bring it in in the pull model.
It's a fairly simple process. OK.
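The cost trade-off between the two models can be sketched with a toy calculation: a push feed pays gas for every scheduled on-chain update whether or not anyone reads it, while a pull feed only pays, via the consumer's own transaction, when a price is actually used. All the numbers below are invented for illustration, not Redstone's or any network's actual figures.

```python
# Toy cost model for push vs pull oracle feeds (illustrative numbers only).

def push_cost(updates_per_day: int, gas_per_update: int,
              gas_price_gwei: float, days: int) -> float:
    """Ecosystem/protocol pays for every scheduled on-chain update; cost in ETH."""
    return updates_per_day * gas_per_update * gas_price_gwei * 1e-9 * days

def pull_cost(reads_per_day: int, extra_gas_per_read: int,
              gas_price_gwei: float, days: int) -> float:
    """Users pay a little extra gas only when they actually consume a price; cost in ETH."""
    return reads_per_day * extra_gas_per_read * gas_price_gwei * 1e-9 * days

# A feed pushed every 10 minutes burns gas around the clock...
monthly_push = push_cost(updates_per_day=144, gas_per_update=80_000,
                         gas_price_gwei=20, days=30)
# ...while a lightly used pull feed only costs gas on the 20 daily reads.
monthly_pull = pull_cost(reads_per_day=20, extra_gas_per_read=40_000,
                         gas_price_gwei=20, days=30)
print(monthly_push, monthly_pull)
```

With these made-up inputs the push feed costs ETH continuously regardless of usage, which is why, as described above, low-traffic networks only get a handful of pushed pairs while pull feeds scale to any number of assets.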
OK. I'm curious, what is the
(35:38):
business model for Redstone? What is the business model for
Chainlink? So this is the question.
They are the leader. No, I mean, this is a good
question. I would say for oracles as
a category, right? Because.
Yeah, yeah. The moment you push data on
chain, it's available for anyone.
So you have no way to let's say monetize on that because anyone
(36:03):
can just query it, because it's in the on-chain storage, in a
public ledger on Ethereum, Avalanche, Arbitrum and so on.
So how do we monetize, how do we create a business model?
In the push model, we usually ask the ecosystem to cover the
development cost and the gas of pushing data on chain.
And with new ecosystems, they are very willing to cooperate
(36:25):
with us, because we do it, I would say, at a reasonable scale,
in contrast to the leader on the market.
Whenever you're a protocol that issues a new
coin, imagine Ethena, EtherFi, Renzo, and you would
like your coin to be integrated into DeFi, you need an Oracle for that. And the process of Chainlink
usually takes weeks if not months and it's also fairly
(36:47):
expensive. That's the reason we were the
first Oracle to create LRT price feeds, for example EtherFi, Renzo,
Puffer, Kelp. Then we were the first Oracle to
create a price feed for Ethena's USDe and sUSDe.
We were the first Oracle to create a price feed for pzETH
from Renzo, on the Symbiotic vault, as well.
(37:09):
And now we are the first and still only Oracle to create a feed for LBTC from
Lombard. It's a Bitcoin liquid staking
protocol, and right now only Redstone offers that price feed.
So for that development work, we are also
discussing, like, contracts with those protocols. And last but not least, there are
protocols that are utilizing the pull model.
(37:30):
Usually we add, like, a small margin for some of the protocols
when they are calling our feeds. Right now it's marginal;
for some it's even, like, turned
off. The reason is we want to scale; we are still in the growth
phase. But in the future you can also
add a small margin on top of the gas price that the
user has to pay. Can you explain, like, the
(37:51):
Redstone token? Because, like, would those extra
fees then end up in some sort of treasury that's, like, on each
chain, that's kind of controlled by the Redstone token? Or, like,
how do you, how will that be managed?
That's a really good question, especially as we are getting
closer and closer to decentralizing the
(38:13):
Redstone ecosystem further. The node operators will be
getting a portion of the fees collected by the pull model.
So essentially it's going to go to the smart contract that later
is going to distribute those fees to the node operators.
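A minimal sketch of what such a distribution contract might do, assuming a simple pro-rata split by stake. The split rule and the operator names and numbers here are illustrative assumptions; the actual mechanism isn't specified in the conversation.

```python
def distribute_fees(total_fees: float, stakes: dict[str, float]) -> dict[str, float]:
    """Split collected pull-model fees among node operators pro rata to stake."""
    total_stake = sum(stakes.values())
    if total_stake == 0:
        raise ValueError("no staked operators to pay")
    return {op: total_fees * stake / total_stake for op, stake in stakes.items()}

# Hypothetical operators staking RED; amounts are invented for illustration.
stakes = {"op-1": 500.0, "op-2": 300.0, "op-3": 200.0}
payouts = distribute_fees(100.0, stakes)
print(payouts)  # {'op-1': 50.0, 'op-2': 30.0, 'op-3': 20.0}
```

A pro-rata split is the simplest design that makes an operator's income grow with its stake, which matches the incentive described a bit later: operators acquiring more RED to increase the portion of fees they receive.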
And very interestingly, when we were designing this
architecture, restaking was not present.
(38:34):
So we were designing that in 2022, early '23, and then EigenLayer
came into place, and Symbiotic as well.
So right now we are considering also utilizing this restaking
flow, so that people can restake RED, the token itself, but also
other assets, like ETH and the major LSTs, on the restaking
(38:57):
infrastructure. But essentially all the fees
collected should be distributed to the operators.
Themselves. OK, so the fees get distributed to the operators,
and then the operators have to stake RED.
Correct, they have to stake RED to make sure they are eligible
for delivering the price feeds, and if they have downtime or
(39:20):
misreport, they get slashed. OK, OK.
And so I suppose one mechanism here is that, well, if
there's a lot of revenue that node operators can generate,
then it sort of makes them more willing to maybe buy.
So they would have to acquire RED tokens and stake them.
(39:44):
Exactly. So in the early days, naturally
we'll be giving out like some incentivization programs to make
sure the early adopters are there, like, you know, the node
operators. But in the long term, they will
be also able to acquire the token on, you know, the markets
to get into the system and also increase the portion of the
(40:05):
stake, the fees, that they're going to
get. OK.
Do you, is there also going to be, let's say someone has RED
tokens and they don't themselves run a node
operator, then, OK, one thing is Chorus One itself could
(40:26):
like, stake some RED tokens. Do you also have some system
where people can delegate and stake with different operators,
or? Yes. This is right now being, like,
finalized whether we are going to utilize fully the restaking
contracts for that, so the restaking flow where people can
(40:46):
easily like you know restake a specific operators or we are
going to create our own infrastructure to do so.
But there will be a capability
for regular users to also delegate to specific operators.
Would they then earn some portion of the fees that operators
get? Exactly. OK. What else is there?
(41:07):
Are there any other functions for the token?
Well, as we were progressing with Redstone, naturally new
functions can be added, but the major one is to make sure
that the data providers are reporting correct data, because
this is the essence of an Oracle, and also this delegation.
So for now, those are the major utility aspects that are planned
(41:30):
for the token itself. I'm curious, you mentioned like
the biggest players today being, you know, Chainlink, Pyth and you
guys. So I think for Chainlink we
kind of talked a little bit about like what their approach
is and how it differs from Redstone.
What about Pyth?
How do you see Pyth, and how do you see how they compare with
(41:53):
Redstone? This is an interesting 1.
So PIF started in the Solana ecosystem.
So essentially what they did, they forked Solana and called it
PIF NET and then they ask the data providers that they call
publishers to publish the data feeds to Pifnet.
And from Pythnet, Wormhole is utilized as the bridge to
(42:15):
deliver the data cross-chain. And while this approach can work
pretty nicely on Solana itself, we believe there are a couple of
drawbacks in the cross-chain expansion, one important one
being the gas cost. So with every packet that they
deliver cross-chain, they also have to verify the signatures of
Wormhole, which is fairly expensive.
(42:38):
For example, in Ethereum mainnet, I I don't know exact
numbers, but it's orders of magnitudes higher than Redstone.
So this holds for each network where the gas is not so optimized,
so Ethereum, Bitcoin L2s, many of them have no
optimized gas, or other networks with high demand for the block
space and the possibility for the gas to go up.
(42:59):
This is a big problem. Solana itself is fairly cheap
in terms of gas, so it's not such an issue,
but for many networks it is. Second thing is they cannot
source data from on chain sources.
So for example, when there is a new yield-bearing asset, usually
the majority of liquidity is on Curve, Balancer, Velodrome,
(43:21):
Uniswap and so on. And we have all of those sources
plugged already into the data sourcing module.
So if we want to create a price for them, we can do it within
hours, like, very quickly. But for Pyth, they need the
publishers to deliver that data to Pythnet, and those publishers
are usually either centralized exchanges or market makers or
(43:41):
bigger players that care about centralized exchange trading
volume, not the liquidity on chain.
So they struggle to create such
price feeds in a timely manner. First aspect that we differ is
that PIF don't support, doesn't support the push model.
We as Redstone started with the pull model, but then we quickly
(44:04):
realized that for the OG protocols, the push model
matters a lot, because they don't want to interfere with the smart
contracts that are already well audited.
That's the reason we made a strategic decision to also offer
both the push and the pull model.
So whenever there is a new network, imagine for example
Unichain, we are a launch partner over there;
Ink, we are a launch partner; and many others are coming, like
(44:26):
Berachain, Monad and so on. They think about their
ecosystem, OK, I want to have both push and pull model for the
protocols building on top of me: I can go to Chainlink for the
push model, I can go to Pyth for the pull model, or I can go
to Redstone and get both of them.
So I would say the value
proposition for all the new ecosystems is fairly visible when
it comes to choosing Redstone.
(44:48):
And last but not least, I would
say the mechanics of Pyth is that they
are traders. They are ex-quants from Jump Crypto, one of the biggest market makers
in the world. So the origin of Pyth as a
company is that they started as an entity incubated by Jump.
(45:08):
And that's the reason the people in the
organization are mostly traders, not developers.
Whereas with Redstone, right now, 75% of our team members are
engineers. Jakub, who used to be a smart
contract auditor with OpenZeppelin in the past and has
been in the Ethereum ecosystem since 2016, makes sure that all
(45:31):
the engineers are coming to Redstone, also very seasoned
ones. So the average number of years
of experience for an engineer right now at Redstone, I think,
is 13. So we make sure that the pipes,
let's say the infrastructure, are very, very solid.
OK, OK. But then the Pyth model is also
this pull model. And then you basically say, hey,
(45:52):
I'm going to call some contract on some chain and then I
basically receive the feed via Wormhole.
And does that also work with the same kind of latency as with the
Redstone model, or? I'm not sure what the
latency is now. I believe it depends on the
(46:14):
network. With Pyth, with some networks they
have seconds; with some they can go already sub-second, as far as I
know. So it's similar to the Redstone
approach, in that we give the decision to the
integrating protocol: whether they want more
distributed gateways but higher latency, or more centralized
(46:35):
gateways with lower latency. With Pyth,
the issue is they all have to pull from Pythnet, and Pythnet
is governed by Pyth. Like, there's no token that is,
you know, governing Pythnet; the PYTH token
itself is used, I think, for staking or some other
capabilities, but not for Pythnet itself.
Like, Pythnet is a centralized blockchain that they're running
(46:56):
for the operators themselves. So there is no, let's say,
choosing; you always have to pull from
that one source. OK, one question: you guys
decided to build an AVS, I think an EigenLayer-secured AVS.
Why did you guys decide to go down that route and what were
(47:17):
maybe some of the alternative designs you considered?
Well, when we started designing Redstone in '21, the final
mainnet was launched in January 2023.
Up until then, restaking wasn't a big thing.
It wasn't a hot topic, right? Like I would say end of 23 and
(47:38):
beginning of 24 was when restaking started to boom.
But as mentioned, we made sure that the architecture we have is
modular and we can implement new solutions as they appear on the
market. I met Shiram for the first time,
I believe end of 23 at Defconnect in Istanbul as far as
I remember correctly and over then I already knew that the
(48:01):
restaking game is going to be important for the systems that
care about decentralization and cryptoeconomic security at
large. So we decided to create an
alternative module to the flow that I presented, where
restaking is utilized to secure the most important price
feeds over there. We already created the testnet
(48:24):
with the help of Othentic, which is an infrastructure provider on
top of EigenLayer, to make sure that all the modules that you
create in the restaking flow are easier to implement.
And we are also experimenting with Symbiotic to check out, like,
how their infrastructure is operating.
So it's still not finally decided like which path we are
(48:47):
going to go. But the reason we do that is we
believe at large, especially with those blue chip assets for
big protocols, they are going to care about the cryptoeconomic
security: how much value someone would have to have to skew the
price feed and create an attack that can end up being a
profitable one. So we were very much aligned
(49:08):
with the restaking flow, and that's the reason we started to
build in that direction as well. So one topic that I think has
gotten some attention in the last years, or maybe more
recently, is the topic of Oracle Extractable Value.
Can you explain what that is and what the significance is?
(49:33):
So Oracle Extractable Value is a value that appears in the
blockchain ecosystem when an Oracle update is delivered and causes a
liquidation. It can be on a lending
market, a CDP stablecoin, a perps protocol and others.
The way it works is, whenever the update is delivered on chain in
(49:53):
the push model, it's available for anyone, the
liquidators bribe the MEV searchers to make sure their
liquidation is included in the next block and that they are the
party that's going to liquidate a specific position.
Let's take for example Venus Protocol, the biggest lending
protocol on BNB Chain, which we are very close with.
(50:16):
They decided to work with Redstone to implement a solution
where we create a very fast auction before the price feed is
delivered. So let me explain that.
Imagine there is a BNB/USD price feed, which is the most important
one on the BNB network, and the next price update on
(50:37):
chain in the push model is going to cause a liquidation.
We partnered with FastLane, which created a protocol called
Atlas, for a very quick auction when the data is being delivered
to the next block, before it happens to be delivered
on the block itself. There is a 500-millisecond auction created
(50:58):
for liquidators to bid on the price feed to get privileged
access to this price feed and execute the liquidation.
So essentially what is happening: instead of paying, for
example, a 5% liquidation bonus to the liquidators, which ended up
being eaten by MEV bots, the liquidator is paying, for example,
(51:20):
2% of the value of the position to get the
priority pass for liquidating this asset, in the very
same block that the price is updated.
And then this 2% is being redistributed back to the
protocol itself, so less value is leaking outside of the
system. And the beauty of the Redstone
(51:40):
implementation: there are three major benefits.
The first one, it doesn't add additional delay, so the auction
lasts for half a second, so 500 milliseconds, which is
negligible even for like fast networks.
Maybe on MegaETH it's going to be a bit different, because
MegaETH aims for something like 10-millisecond block times.
(52:01):
So we'll see how it's going to play out over there.
But for the majority of networks, it's negligible.
The second thing is the protocol doesn't have to do any code
changes. We learned very much
from our own experience that people don't like to change
smart contracts once they're deployed and once they are
audited, which we also appreciate, because changes can interfere
(52:22):
with the security of the ecosystem.
So we created a flow that is interchangeable with the Chainlink
interface, so you can use just Redstone and then tap into OEV
right away. And the third benefit of this
solution is that if for whatever reason the auction within this
Atlas protocol fails, so there's a glitch, or for whatever reason
(52:43):
the auction didn't pass through or there were no bidders, then
the price feed is updated as usual and the regular
liquidation flow can happen. So the worst-case scenario
is just the regular outcome of the current system. And it's running
right now. This is already in
production on Venus Protocol and a couple of other protocols that
(53:05):
we cannot announce yet, but they're implementing it, and
the results are very, very satisfactory, with about 90%
of OEV opportunities being captured. OK, OK. So with
Oracle Extractable Value, does it specifically apply to
liquidations, or are there other cases where this also
(53:26):
comes into play? There will be some other moments
where OEV can play a matter, butright now the biggest
opportunities in liquidations themselves.
So whenever there's a liquidation, there's always this liquidation
bonus, to make sure the liquidators
are incentivized to, you know, actually take the position,
sell it on the market and repay the debt in the lending or CDP
(53:49):
protocol. And right now the biggest
opportunity is there. We ran an analysis with Venus:
if they had implemented Redstone OEV from the
Venus launch, they would have captured approximately $100
million by today, which is like a pretty sizeable amount of
(54:10):
money to put it mildly. That's a lot.
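The economics described here can be put into a quick back-of-the-envelope model. The 5% bonus, 2% winning bid, and roughly 90% capture rate are the example figures from the conversation; the liquidated position sizes below are invented for illustration.

```python
def oev_recaptured(position_value: float,
                   liquidation_bonus: float = 0.05,
                   auction_bid: float = 0.02) -> float:
    """Value the protocol recaptures per liquidation via the OEV auction.

    Without the auction, the full `liquidation_bonus` leaks to liquidators
    and MEV bots; with it, the winning bid flows back to the protocol.
    """
    assert auction_bid <= liquidation_bonus  # bidding above the bonus wouldn't pay
    return position_value * auction_bid

def total_recaptured(positions: list[float], capture_rate: float = 0.9) -> float:
    """Apply the ~90% capture rate quoted for the Venus deployment."""
    return capture_rate * sum(oev_recaptured(p) for p in positions)

# Illustrative liquidated positions (USD values are made up).
liquidations = [1_000_000, 250_000, 4_000_000]
print(total_recaptured(liquidations))
```

Run over a protocol's full liquidation history, this is the kind of estimate behind the roughly $100 million figure quoted for Venus.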
Yeah, yeah, yeah, for sure. I'm curious, what kind of
synergies or effects do you see between oracles and more
institutional use cases of crypto?
(54:31):
It's a brilliant question for 2025, especially that Trump is
going to be, you know, officially put into the White
House on the 20th of January, as far as I know.
It's going to spark a lot of excitement within the
institutions that want to try out blockchain ecosystems.
(54:52):
So the inflow of new players in the market that are not so
sophisticated with blockchain technology is going to be pretty
large. And we as Redstone strategically
next year are going to create a division to onboard those
institutions. So a lot of educational work
is going to be needed, to tell what makes sense, what doesn't,
and where blockchain technology can actually bring value.
(55:15):
What I envision is going to happen are three things.
One, we are going to see way more traditional companies
launching their own L2 or a network, similar to how Sony launched
Soneium this year. And with the OP Stack being so standard
right now in the ecosystem, more and more companies are
going to launch their own L2s.
(55:37):
I really would like to see the behemoths like Uber or Airbnb
launching their own chain, then settling transactions
on their own ecosystem, and then also utilizing a token to
incentivize the users to keep using their platform instead of
competitors'. That can be huge at large.
And we as an Oracle want to support those kind of use cases.
(55:59):
The second area is tokenization of assets.
So right now we are cooperating with one of the largest players
in tokenization of assets in the US.
We are supporting them in the cross-chain expansion and
utilization of a tokenized fund in the DeFi protocols themselves.
And I believe especially with BlackRock releasing the short
(56:21):
video clip promoting Bitcoin, I'm not sure if you've seen it
because it was released yesterday. It's a three-minute
video in which BlackRock literally promotes Bitcoin as an asset.
I believe it's going to be a signal for a lot of institutions
interested in tokenization to actually explore either pilot or
go full force. And we as an Oracle, as Redstone
(56:42):
are ready to support them because we already learned a lot
with this first client that we onboarded.
And the third thing that is going to happen: in Europe, we
are going to have new players following MiCA.
Now, I wouldn't say MiCA is ultimately good for the whole
ecosystem, because there are some rules that are not necessarily
(57:02):
so clear or beneficial for the crypto builders.
But the very important aspect is that it gives clarity.
The biggest problem for institutions and corporates in
Europe engaging with crypto so far was that there was no clarity,
like there were no rules.
So they really weren't aware of what's possible, what's
(57:23):
by the book and what's against the book.
And right now with MiCA, especially after the first
quarter when it's going to be already tested, I believe there
are going to be more and more players engaging with that.
Just looking at, yeah, what's ahead for like 2025, 2026?
Like how do you see the whole Oracle landscape evolving?
(57:46):
I believe it will be opening up further.
So one thing I really like about Chainlink is they try to expand
the pie in terms of, like, new players and institutions.
So they try to educate the banks and many of the big players
globally that crypto in general, as a technology, is a
very interesting development and they should engage with
(58:08):
that. So I believe this is positive to
the whole ecosystem. And what I'm not a big fan of,
with Chainlink specifically, are some of the
monopolistic tactics and some of the less elegant plays, let's
say, on the market. And I hear from many
of the protocols that are cooperating with them that they
are more and more tired of them trying to make sure that
(58:31):
they are the only dominant player.
So in 2025, I expect the market to open up a little further.
There are two reasons for that. One, for new players that are going
to come over to the market, Chainlink will have no capacity
to support them because of their design.
As I mentioned, if there's a new chain, it takes them a very long
time and a lot of cost to deploy over there.
(58:51):
And we are here to support them. With new assets it's the same:
we are always the first Oracle to support all of those
new assets that people are excited about in DeFi.
And also, there is a pretty large legacy in terms of,
like, the technical implementation at Chainlink that
they will have to repay over time, and that's going to impact
(59:12):
their pipeline of new integrations. With Redstone,
our bet is there are going to be way more specialized app
chains, either EVM or non-EVM networks.
We want to expand further to potential non-EVM ecosystems.
We are launching on Sui soon, probably Solana, Aptos and other
(59:33):
Move-language ecosystems, Movement too.
It's already confirmed. And as we progress
into next year, I expect Chainlink, Pyth and Redstone to
become even more, like, dominant, because, as mentioned, there are
economies of scale. So a lot of new entrants succeed
in securing, for example, one or two or three clients,
(59:55):
but then they realize it's very hard to get over a barrier of,
let's say, 10 clients. The cost of choosing an early
Oracle provider is pretty high, because you don't know whether
they're going to sustain or not. Redstone, since January 2023 when
we launched mainnet, has had zero, I repeat, zero price manipulations or
(01:00:16):
downtimes, which is not the case with both Pyth and Chainlink.
Both of those players had either smaller price misreportings,
like, for example, a pretty big one with Chainlink was wrapped
stETH in December 2023, when Chainlink reported a price skewed
by 25%, which is a lot for such a big asset.
(01:00:39):
Or when Pyth had a problem when Wormhole was struggling to reach
consensus, I believe that was the beginning of 2024; they
stopped delivering prices cross-chain, right?
Because they rely 100% on their bridge itself.
So if the bridge is down, they stop delivering data cross-chain.
And one narrative we are going to nail as Redstone next
(01:01:01):
year is BTCFi, programmable applications on
top of Bitcoin. We are very close with that ecosystem.
Bitcoin as an asset has no
ceiling right now. It's over $100K, and we
believe it's going to keep growing and expanding.
And we are super close with Babylon, Lombard, pumpBTC,
Lorenzo, Solv and the whole ecosystem of BTCFi.
(01:01:24):
And we'll be supporting their expansion with proof of reserves
as we delivered already for Lombard, but also other use
cases that will allow them to go to new networks and create more
sophisticated use cases based on Bitcoin itself.
Oh, fantastic. Anything else you want to touch
on? Well.
(01:01:44):
I am super positive about 2025 to be honest.
When I was entering '24 I was fairly optimistic, but with '25,
I truly believe it's going to be the year of Redstone.
We have a number of clients and big partnerships
lined up. We are also running Redstone
Expedition, which is our program for engaging the community, where
(01:02:05):
you can earn Redstone gems, which are points
within the ecosystem. The team itself, we just had our
Christmas event, and everyone got
their Redstone backpacks, which are pretty cool.
I'm excited about that. And one thing I want to finish
with is that the crypto scene is getting more and more polarized.
(01:02:31):
I have a feeling that after the elections in the US, people just
want to align themselves somewhere, to be in favor of
one option and against the other.
And you can see it between EVM and non-EVM networks, Solana
versus Sui. Recently I saw on Twitter also a
big fight between Aave and Morpho, where Polygon is involved.
So people will try to polarize more and more.
(01:02:54):
And as Redstone, what we really want to focus on is growing the
pie and making sure especially the new use cases are addressed
because we believe, OK, if someone is attacking you, you
naturally have to respond and like defend yourself.
But we ourselves will not go into very strongly polarized takes.
We get advice from many of the marketing agencies,
(01:03:16):
like, it doesn't matter what you say, it just has to be
polarizing so that you get the attention of people.
But we don't want to play that game.
We are builders, and long term we want to support builders.
And that's the reason our core focus is just to expand the
universe and bring new value to the ecosystem itself.
And myself, I'm also very proud of the crypto folks, that they
(01:03:38):
kept pushing the frontier, and right now crypto is at full
force going into the bull market.
Absolutely. Well, thanks so much, Marcin,
for coming on. It's really great to learn about
Redstone. It does feel like a very
elegant and scalable approach.
And I think, I think it's abundantly clear, I think for
everyone, you know, just how crucial oracles are to support
(01:04:01):
like a wide range of use cases. So I'm excited to see how, you
know, Redstone is going to develop in the next
year, and excited to see also how the token launch will go soon.
Thank you, Brian. It was an absolute pleasure to join
over here. The token launch itself is going to be
soon, naturally, but we are pretty positive, and I'm
(01:04:22):
extremely, extremely excited and dedicated to keep growing
Redstone. I'm working 14 hours a day
with a smile on my face. I'm, you know, satisfied with the
traction that we deliver. So thanks a lot for inviting me
and I will keep following Epicenter for sure.
Thanks so much, Marcin.