
November 24, 2025 33 mins
This conversation delves into the recent outage of the Cardano blockchain, exploring the causes, implications, and community responses. Peter breaks down the technical aspects of the incident, clarifying misconceptions about the nature of the outage, the role of stake pool operators, and the recovery process. The discussion also highlights the importance of community collaboration and the challenges posed by media coverage and misinformation surrounding the event.

Takeaways
✅ Cardano experienced a temporary chain partition due to a malformed transaction.
✅ The incident was not a hack; funds were not compromised. 
✅ Stake pool operators played a crucial role in the recovery process.
✅ The network's self-healing capabilities were demonstrated during the incident.
✅ Media coverage often misrepresents the situation, leading to misinformation.
✅ Community collaboration was key in addressing the outage quickly.
✅ The incident highlighted the importance of robust governance in blockchain ecosystems.
✅ Lessons learned will strengthen the Cardano network moving forward.
✅ The response from the Cardano community was prompt and effective.
✅ Future steps include a thorough retrospective of the incident.

Chapters
00:00 Cardano Blockchain Outage Overview
02:48 Understanding Chain Partitions and Forks
06:07 The Role of Stake Pool Operators
08:57 Technical Breakdown of the Incident 
12:08 Community Response and Recovery
14:58 Implications for the Cardano Ecosystem 
18:10 Media Coverage and Misinformation
20:55 Lessons Learned and Future Steps 
23:49 Final Thoughts and Community Support

DISCLAIMER: This content is for informational and educational purposes only and is not financial, investment, or legal advice. I am not affiliated with, nor compensated by, the project discussed—no tokens, payments, or incentives received. I do not hold a stake in the project, including private or future allocations. All views are my own, based on public information. Always do your own research and consult a licensed advisor before investing. Crypto investments carry high risk, and past performance is no guarantee of future results. I am not responsible for any decisions you make based on this content.

🔗 https://www.youtube.com/watch?v=Fq8FhvxET2k

Subscribe to the audio podcast:
🔗 https://bit.ly/learncardano-spotify
🔗 https://apple.co/3jEPM8C
🔗 https://learncardano.io/

Follow on Social:
🔗 https://x.com/learncardano
🔗 https://facebook.com/learncardano

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Over the weekend, we saw the Cardano blockchain suffer a major outage. Downtime? Was it hacked? Something happened here. But I do have all the details, and I was online when a lot of this was happening as well. So I'm going to break down the myths and what actually happened on the Cardano ecosystem: who did it, why it happened,

(00:23):
how it happened, and the ramifications of all this, in this video. I'm Peter. If it's your first time here, hit that thumbs up, like, subscribe, notification bell. My other camera is having issues at the moment and it's making editing hard, so I'm swapping to a webcam for now, so that's the best I can do. But hopefully it's not about the video, it's all about

(00:43):
the content. And this is where we're at. So this is the main announcement from Intersect. For those that don't know, those people that aren't in the Cardano ecosystem, Intersect is a members-based organization that is there to look after the community. So this is the future of governance, and Intersect is the founding body to help along with all of that, and this is their announcement here.

(01:05):
So, Cardano experienced a temporary chain partition today after a malformed transaction triggered a bug in an underlying software library. Now, this was on the twenty-second my time here in Australia, six a.m. on the twenty-second, so Friday of last week. There is a big breakdown article here that does talk about it, but I won't go through the article; it's a

(01:27):
bit lengthy. If you want to read the details, I'll put links down below. But essentially we had a chain partition. So not a chain shutdown, not a chain stoppage, it wasn't hacked. The chain split, essentially forked. But this is a different type of fork. If you've been in the blockchain space for a long time, you would know that Ethereum, for example, forked in the

(01:50):
very early days into Ethereum as we know it now and Ethereum Classic. So there's two: Ethereum and Ethereum Classic. And that happened because of a massive hack where a massive amount of money was lost, so they had to roll back transactions, invalidate a whole bunch of them, and say this is now the main chain, erasing the history of that particular hack. So it's not that

(02:12):
type of fork. This type of fork actually happens within the Cardano ecosystem all the time. Every now and then, every few blocks, a fork may happen, and I'll go into the details of a breakdown in a moment. But this here was a chain partition, and that's why it's worded in that way. The root cause: the malformed delegation transaction exploited a bug in deserialization code. Resolution: SPOs just need

(02:35):
to upgrade to the latest version of the node to get past this. User impact: no funds have been compromised; most wallets required no user action. Status: the network is converging as the majority of nodes upgrade. So okay, that's the main article. This is the current version of the node. If you are that interested, you can go through the code.
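To make the "deserialization bug" idea a bit more concrete, here is a minimal, purely hypothetical Python sketch — not the Cardano node code and not the actual bug — of how a decoder that trusts attacker-supplied lengths can blow up on a malformed payload, while a patched decoder rejects the same payload cleanly. Two node populations treating the same transaction differently is exactly how a partition starts.

```python
# Hypothetical illustration only -- this is NOT the Cardano node code or the real bug.
# It sketches the failure class: a decoder that trusts attacker-supplied counts and
# lengths hits an unhandled error on a malformed payload, while a patched decoder
# rejects it. Nodes with the two behaviours then disagree about the same transaction.

def decode_buggy(payload: bytes) -> list[bytes]:
    """Toy format: 1-byte item count, then per item a 1-byte length + body."""
    count, offset, items = payload[0], 1, []
    for _ in range(count):
        length = payload[offset]                      # IndexError here on bad input
        items.append(payload[offset + 1:offset + 1 + length])
        offset += 1 + length
    return items

def decode_patched(payload: bytes):
    """Same toy format, but malformed payloads are rejected instead of crashing."""
    try:
        count, offset, items = payload[0], 1, []
        for _ in range(count):
            length = payload[offset]
            body = payload[offset + 1:offset + 1 + length]
            if len(body) != length:
                return None                           # reject the transaction
            items.append(body)
            offset += 1 + length
        return items
    except IndexError:
        return None                                   # reject the transaction

good = bytes([2, 1, 0xAA, 2, 0xBB, 0xCC])             # two well-formed items
bad  = bytes([2, 1, 0xAA])                            # claims two items, supplies one

print(decode_patched(good))   # [b'\xaa', b'\xbb\xcc']
print(decode_patched(bad))    # None (rejected cleanly)
print(decode_buggy(bad))      # raises IndexError -- the unhandled edge case
```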

(02:56):
You can have a look yourself. If you're a stake pool operator, please upgrade to this version of the node. Fortunately, I myself was on a particular version, a very old version, probably about a year old, and that was running just fine. That was pushing through the blocks and we didn't have to do anything with it. Fortunately, I checked and verified on that night when I saw the news coming out

(03:18):
about the Cardano chain not producing blocks, and they said that my particular version, I think it was 10.1.4, was perfectly fine. It was still going through, and I could see online that my blocks were still going through, and I could see here, this is a snapshot of one of my relays, that the last propagated block was

(03:39):
twenty seconds ago, which is kind of normal, and it was propagating through the network. I ended up okay. So there was definitely some sort of issue: you can see here on the tip difference a three-hundred-and-five difference from the reference tip. So this is the slots. As the chain's going along, it progresses up to a particular number, the reference there. That's what it

(04:01):
should have been. But it was definitely slowing down, and we could see that all on chain. Now, if we roll back a little bit, a couple of days, on the twentieth of November this was spotted on the preview chain, so they could see that something happened on the testnet. It was forked into two, and only one

(04:23):
block producer was producing blocks. Then that slowed down and eventually it wasn't producing blocks anymore. So what happened from this point is that someone saw this particular transaction, or someone tried this transaction on testnet and went, hey, let's see what will happen if we take this transaction onto mainnet. Will it destroy Cardano as we know it?

(04:48):
So unfortunately, this is the person that had done this particular transaction, and I'll play another clip in a second. A stake pool operator here writes: "SPO takes blame for cyberattack. Unverified." It has been verified now. "If true, next year they can win Most Impactful SPO." Definitely had

(05:10):
an impact. And this is what Homer J writes — his pool here, his ticker is AAA, note that, I'll talk a little bit more about that later: "Sorry, I know the word isn't enough given the impact of my actions. Cardano folks, it was me who endangered the network with my careless action yesterday evening. It started off as a 'let's see if I can reproduce that bad transaction' personal challenge,

(05:35):
and then I was dumb enough to release it on mainnet." And there's a lot more to this as well. Now, there's some really weird stuff in here. I've interacted with Homer over the years, way back when I first started my stake pool as well. Homer is one of those OGs. He's been in the ecosystem for a very long time. He was a stake pool operator in the Incentivized

(05:57):
Testnet days, so this is before staking launched on mainnet, and I have to say, every person I interacted with that was around the Cardano ecosystem in those days really knows their stuff. When it comes to coding, when it comes to network administration, when it comes to running and operating on a Linux operating system, they really know their stuff.

(06:22):
And Homer here is one of the ones that actually contributes to various bits of code that many stake pool operators use. You can pull it up — I'll just pull this up here. So this here is the topology for the pools that are operating on the Cardano ecosystem. So what you see here,

(06:43):
let me zoom in. This is the topology setup for the peer-to-peer network that all the relays out there and the block producers use, and it sets up essentially some bootstrapping peers. Every node will connect to the Cardano Foundation's, Emurgo's and IOG's bootstrapping peers. Then you can set

(07:06):
up your local roots. Every stake pool operator will then connect their relay to their other relays — so we run multiple relays and a block producer — and this is where it lets you set that up. And then you have backup access points here, so you could connect to any of these. And these are the contributors to this bit of code that was written up to make this all happen.
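For readers who haven't seen one, here is a rough Python sketch of the kind of P2P topology file being described — the field names are approximate and the addresses are made up, so treat it as a simplified stand-in rather than a copy of the real mainnet file: bootstrap peers, the operator's own local roots (their relays and block producer), and public roots as backup access points.

```python
# Simplified, illustrative stand-in for a node's P2P topology file -- field names
# are approximate and every address below is made up; this is a sketch, not the
# real mainnet configuration.
import json

topology = {
    # Bootstrap peers: well-known relays (e.g. run by the founding entities)
    # that every node can dial to join the network initially.
    "bootstrapPeers": [
        {"address": "backbone.example-foundation.org", "port": 3001},
        {"address": "bootstrap.example-iog.org", "port": 3001},
    ],
    # Local roots: the operator's own relays / block producer, always connected.
    "localRoots": [
        {
            "accessPoints": [
                {"address": "relay1.mypool.example", "port": 6000},
                {"address": "relay2.mypool.example", "port": 6000},
            ],
            "advertise": False,
            "valency": 2,
        }
    ],
    # Public roots: extra well-known relays used as backup access points.
    "publicRoots": [
        {
            "accessPoints": [{"address": "relay.other-pool.example", "port": 3001}],
            "advertise": False,
        }
    ],
}

# Write the illustrative file to disk so a reader can inspect the shape of it.
with open("topology.json", "w") as f:
    json.dump(topology, f, indent=2)
```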

(07:26):
We have some well-known pools here, you can see their tickers, PSB among others. But the one here, AAA — this is Homer's pool; he is a contributor to this. He's been working.

Speaker 2 (07:38):
You know.

Speaker 1 (07:39):
I've seen posts say and comment on the fact that, you know, this kid, whoever it is, vibe coded using AI to hack the Cardano ecosystem, and I just have to put a comment on that. I really don't think Homer would have used AI to have done all this. Maybe to help with cleaning up code or something, I

(08:01):
don't know, but he wouldn't have just vibe coded his way into hacking this, especially when he is a contributor and has worked in the Cardano ecosystem as a stake pool operator for over five years — at least five, maybe six years. I don't know the timelines, but he is definitely very

(08:21):
skilled in what he can do within the Cardano ecosystem. So here: to rely on AI instructions on how to block all traffic in and out of a Linux server, without properly testing it on testnet first, and then watch in horror as the last block time on explorers froze. Okay, Homer, Homer, Homer.

(08:41):
For those that don't know, to block traffic on a Linux server is simply one command line. You're probably using UFW — "ultra friendly firewall"? What does it stand for? I can't remember now — Uncomplicated Firewall. So to get that up and running is literally one command line. It's not hard.
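For context, and only to show how trivial the step being described is, here is a small sketch of the standard ufw default-deny rules — wrapped in Python purely to keep all the examples in this write-up in one language, and assuming a host that has ufw installed and root access.

```python
# Illustration of the point being made: locking down a Linux host with ufw is a
# couple of trivial commands, no AI required. Wrapped in Python only to keep the
# examples in one language; assumes ufw is installed and this is run as root.
import subprocess

def lock_down_host() -> None:
    """Deny all inbound and outbound traffic with ufw (requires root)."""
    for cmd in (
        ["ufw", "default", "deny", "incoming"],
        ["ufw", "default", "deny", "outgoing"],
        ["ufw", "--force", "enable"],          # apply the rules without prompting
    ):
        subprocess.run(cmd, check=True)

if __name__ == "__main__":
    lock_down_host()
```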

(09:02):
So it's one of those really basic things that you first do on every single node that you set up. You've got to block all the traffic, and just one command, deny all, would have done all that. So I don't buy the comment there that you vibe coded any of this stuff here. Anyway, there's a lot of stuff here and I won't go too deep into it any more.

(09:25):
I'll let you form your own opinions about this, and who knows, there may be legal proceedings behind all this as well. And this is what Charles had to say about it all. I will play this clip so you can listen to it as well; it has a little bit of extra commentary here too. So let me just play this clip.

Speaker 2 (09:41):
If this was a normal proof of stake network with bonding and slashing and checkpoints, you would have catastrophic fund loss in the network, and you would also have to manually reset the network and reset a checkpoint. This was done in a decentralized way, where one group, us, issued the patch and then Intersect and many other organizations

(10:04):
helped propagate it. But the network never went down and the remediation is rather straightforward. Ouroboros survived the worst possible thing that can happen to a proof of stake protocol, and the network is relatively unharmed from that. This bug existed since twenty twenty-two, and it was in an obscure cryptographic library, and somebody discovered it on the

(10:29):
testnet, and they tested it on the testnet, breaking it the minute that happened. So it was a targeted attack, premeditated. It probably took several hours to figure out how to do it. Nobody was talking about it; he didn't tell anybody he was going to do it, and then he deployed that transaction on mainnet, knowing full well it would create a network partition. So it was a

(10:49):
malicious act. When we looked at the transaction, the ADA that he used to do it came from a retired pool. We were able forensically to trace that pool back to his ITN days, where he actually used his real-life human identity to register the website associated with his ITN stake pool. And then after I broadcasted that information, he issued

(11:10):
a public apology and said it was an accident, he didn't know what he was doing. But if you look at his history, he was in the Discord, and his post history is extremely critical of Input Output and me in particular. There seemed to be a personal vendetta there. It's important to point out that Cardano has been running twenty-four hours a day, seven days a week, and it did not stop running. The network did not

(11:32):
halt during this incident. There were two chains that became one, but the original chain is still unbroken and unedited. The chain is still immutable. No double spends occurred on that chain, and no editing of the transaction history occurred on that chain. So if you took a week off, and you came back and resynced your node,

(11:53):
you would notice no discrepancies inside the system. So it's another lie that's being propagated right now in the Twitter spaces that it was fixed manually, that there was a manual intervention. This was nothing of the sort. This was a giant proof of stake system recovering from effectively a network split, almost like a fifty-one percent attack, and doing so in less

(12:14):
than twelve hours. It has never happened in the history of proof of stake that they were able to put all those pieces back together without centralizing the network.

Speaker 1 (12:24):
Yeah, so, other chains where something bad has happened — a version of software went out that crashed it. I'm sorry Solana guys, I have to mention it. But a lot of the times when the Solana chain does go down, the entire chain needs to be shut down, stopped, nodes upgraded, and then resynced. Whereas here, with Cardano, the

(12:45):
chain did not stop. The chain kept on going, all the transactions kept on going through while things were being fixed and nodes were upgrading, and then the two forked chains merged back together. So we had the partitioning, the forking, and then it merged back together, and I think that's absolutely amazing, how that happened. And then there's some more comments here, and I'll go into more technical details

(13:07):
about this. So Jane here writes: is it safe to assume that for the five hundred or so transactions that ultimately were not accepted during the fork, the forking event, the transaction fees were ultimately refunded? Which, if true, is pretty cool and definitely not what happens on many other chains. And Nicholas here responds: it is similar on any chain

(13:30):
that forks, you are charged for what happened on the fork that survives. It's different from being charged for failing transactions. These transactions didn't fail, they just did not exist. And the host writes: yes, it "never happens", advocates — yep, Jane gets it. Now, I don't want to push aside a lot of people that were actually affected. So the people

(13:50):
that were going through and trying to do perpetuals using Strike Finance, and they were trying to put their orders through because they had big positions open or were going to be liquidated, and, you know, at that point in time they would have wanted to get their orders through. Fortunately for a lot of those people, the price of ADA was essentially unaffected by any of this. It remained at that zero

(14:11):
point four-four mark throughout this entire event, and now it's going up again. So a lot of these people that were trying to get orders and transactions through probably could have recovered quite easily in a couple of hours afterwards, because the price didn't move that much. But anyway, from the event, the patches going through, a lot of people got together and made this work. Fantastic. All

(14:34):
the bad nodes nearly instantly swapped chains, which is absolutely amazing, and the consensus and network protocols worked. We have proven it. So this is one of the designs within Cardano itself: within the network, it's the Nakamoto consensus algorithm that is supposed to self-heal. If there are forks and whatnot, it will self-heal, and then the

(14:55):
chain with the majority of the stake will be the winner at the end. I thought this was pretty cool: this is an online meeting between a bunch of operators and contributors that were working through trying to fix all this, watching the graphs live as the bad chain was gobbled up by the main chain, the green one here. And I'll pull up this tweet here

(15:17):
from Marcus, who actually explains this a little bit better, so we know exactly what's going on here. So: "Here's a quick update how the good fork ate up the bad one. All nodes on the red have now been converted to green. No more delta as there is no more bad fork." If we zoom in a little bit, we can see this here. All of this red line here is that bad chain as it's going through

(15:39):
minting blocks, but as the convergence is happening, it's being converted into the green line here, and then at this point, if you can just see it there, that's when the bad chain no longer exists. This purple line here, this is the divergence of blocks. So let me just read Marcus's explanation here: "These are delta blocks between the two forks, the outer right Y-axis. This metric initially

(16:03):
grew with more stake producing blocks on the bad chain. Then SPOs patched their nodes and moved their stake to the good chain, at which point the purple line started to decline as more stake was placed on the good fork than the bad one." Now here's some more context for you. This is a screenshot that I managed to capture in the morning after I woke up, actually. Like I said, for me, I didn't have to stay up

(16:26):
all night helping with this or upgrading my pool. My pool was fortunately on an older version which wasn't affected. And here we can see the height battles between the different pools out there. It's at three hundred height battles here, going through and then slowly coming down to almost zero. So, in a healthy epoch you

(16:49):
do see this go up and down as height battles and slot battles happen between different stake pools as they mint the same block. But of course this one was very different because of that giant forked chain. And here you can see, with a little bit more clarity — this is my pool winning a height battle between multiple different pools that were trying to mint at the same time,

(17:10):
and one of these would have been on the incorrect chain. Fortunately, mine managed to win this one here and keep the chain going. So you can see three of them in a row here having this particular height battle. This one here is between two stake pools. Now, normally you don't see this many height battles one after another like this. When you see something

(17:30):
like this, the chain has definitely forked and something's going on here, and you can see each one of these stake pools being paired off, fighting over which block is the correct one to add to the chain, to the longest chain. If you're wondering what this all is, that's what is happening under the hood here. Now, there is also this really cool recording from PoolTool here, going through and

(17:54):
actually watching this all happen live. Let me just pause — fortunately someone recorded this so we could actually see it. So let me just play a little bit of this. It goes on for almost three minutes; let me just play a little bit of it, and if you want to watch it all, links down below.

Speaker 3 (18:14):
This is the big chain, the good chain. It's going to be catching up to the bad chain, right here. This is just happening in real time, if you want to geek out.

Speaker 1 (18:22):
On this? Definitely something to geek out on. So I'll just fast forward a little bit.

Speaker 3 (18:26):
There you go.

Speaker 1 (18:26):
You can see the chain catching up here, and we're going through and having a look at the various height battles that are happening at the moment.

Speaker 3 (18:34):
Three six zero block coming in right now. I think LCP won this one, because that's all the data it has. But then we'll see the new block come in, and it probably won't immediately be shown to be the winner, but it will be the winner because it's on the longest chain — there it comes, because it's six blocks. So again, the height battle stuff is all figured out about ten blocks

(18:54):
past the tip, so we've got a little ways to go before it actually assigns the winner to that pool. That's kind of cool.

Speaker 1 (19:00):
Good work there for upgrading your pool. I'll just fast forward a little bit more for you guys. But you can see here this big long line — that bar is growing in size and progressing along there as well. And then: wow, I've never seen that either. I've never seen that many pools have a height battle. So: one, two, three, four, five. That's incredible. This really

(19:22):
was history in the making. There we go, the chains caught up.

Speaker 3 (19:25):
You're done with that stupid problem, aside from, of course, replaying whatever the other chain contained.

Speaker 1 (19:29):
Now — yes, that stupid issue is done and contained. Awesome to see, and recorded there in real time and sorted out. Amazing, amazing stuff. Thank you for recording that as well; that was brilliant, that was really cool. Okay, so let's go into some of the details here. So, Berry — Alessandro here, creator of SpaceBudz — was big in the

(19:51):
Cardano ecosystem and started off the NFT phase for the chain. But here, let me read out what he writes. It's a bit of a long thread, and a lot of this is really good stuff. What is fascinating about yesterday's event is how Cardano recovered from a minority chain and got rid of the symptom while preserving most of the history and progress since the incident. Let me explain what

(20:13):
I mean by recovering from a minority chain. I don't know the exact numbers, but I assume thirty to forty percent ran node A, no bug — that was my pool, it was part of that — and around sixty to seventy percent ran node B that had the bug. Once the malformed transaction was submitted, node A rejected it and formed a minority chain,

(20:33):
slower progress because of the lower stake. Node B accepted it and formed a majority chain, faster progress because of the higher stake. Now, what he means by higher and lower stake is that all the pools on node A collectively had less stake delegated to them, whereas the node B pools had more delegation. Normally, in a Nakamoto consensus like Cardano,

(20:58):
nodes follow the longest chain, but the node A pools didn't switch to it because it violated their ledger rules. So for a while, minority fork A lagged. Then SPOs upgraded their nodes to the correct version and the stake on fork A gradually grew. Once it reached its tipping point of fifty-one percent, fork A could start catching up to B.
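As a back-of-the-envelope illustration of that tipping point — toy numbers only, not real chain data — here is a small Python sketch in which each fork's expected block production is proportional to the stake behind it; as SPOs patch and stake migrates to fork A, it eventually overtakes fork B and becomes the longest chain.

```python
# Toy simulation of the tipping point described above -- made-up numbers, not
# real chain data. Each fork's expected block production is proportional to the
# stake backing it; as SPOs patch, stake migrates from fork B to fork A, and once
# A holds the majority it grows faster and eventually becomes the longest chain.

def simulate(hours: int = 12, migration_per_hour: float = 0.08) -> None:
    stake_a, stake_b = 0.35, 0.65          # initial split, roughly Berry's estimate
    len_a, len_b = 0.0, 0.0                # chain length, in expected blocks
    blocks_per_hour = 180                  # ~one block per 20 seconds
    for hour in range(1, hours + 1):
        len_a += stake_a * blocks_per_hour
        len_b += stake_b * blocks_per_hour
        leader = "A" if len_a > len_b else "B"
        print(f"h{hour:02d}  stake A={stake_a:.2f}  "
              f"len A={len_a:6.0f}  len B={len_b:6.0f}  longest={leader}")
        # SPOs upgrade over time: stake moves from the buggy fork to the good one.
        moved = min(stake_b, migration_per_hour)
        stake_a, stake_b = stake_a + moved, stake_b - moved

simulate()
```

With these made-up numbers, fork A overtakes fork B around the fifth simulated hour, which is the same qualitative behaviour as the delta line in Marcus's graph: it grows while the buggy fork has the majority of stake, then shrinks to zero once the patched side takes over.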

(21:20):
That was the whole event here that we saw in the video recording. Eventually A became the longest chain and fork B was resolved, yet most of the history and progress was preserved and the malformed transaction removed. The network recovered and self-healed. Now I'm wondering how a BFT-style consensus, e.g. Solana, would have performed in such a case. First,

(21:43):
some explanations. In a BFT-style consensus, you need at least sixty-seven percent of votes to produce a block and make progress. With less than sixty-seven percent, nodes can't reach an agreement. With that being said, we have two scenarios. Scenario one: forty percent run node A (no bug), sixty percent run node B (bug). The chain halts.

(22:06):
No progression is possible because nodes can't reach an agreement. Sounds bad, but this is the best that can happen here. Why? Now the malformed transaction doesn't make it onto the chain, but a huge coordination effort is needed to restart, which is why it's not always easy. Scenario two: thirty percent run node A (no bug), seventy percent run node B (bug). The block is

(22:29):
instantly finalized with the malformed transaction. Minority A tries to reject the block but has no other choice than to halt. Meanwhile, node B keeps progressing the chain. So if this scenario had happened, with the amount of stake delegation to node B being higher, I would have had to wake up in the middle of the night and upgrade all of my nodes

(22:51):
to the latest one so that I wouldn't be missing out on stake pool rewards, and my delegators wouldn't have been missing out on their staking rewards. So, continuing here: the symptom is now permanent in the system and there's no recovery possible. The only solution is to coordinate a hard fork from before the symptom, but all progress and history since then is lost.
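To make the two BFT scenarios above concrete — this is just the sixty-seven percent quorum arithmetic, a simplification and not a model of how any specific chain actually behaves — here is a tiny Python sketch.

```python
# Illustrative arithmetic for the two BFT-style scenarios described above.
# A block is finalized only if at least 2/3 of the stake votes for it;
# otherwise no progress is made at all.

QUORUM = 2 / 3

def outcome(stake_without_bug: float) -> str:
    """What happens when the buggy nodes accept the malformed transaction."""
    stake_with_bug = 1.0 - stake_without_bug
    if stake_with_bug >= QUORUM:
        return "bad block finalized instantly -- no clean recovery"
    if stake_without_bug >= QUORUM:
        return "bad block rejected -- chain keeps going"
    return "no quorum on either side -- chain halts until coordinated restart"

print(outcome(0.40))  # scenario 1: 40% patched / 60% buggy -> chain halts
print(outcome(0.30))  # scenario 2: 30% patched / 70% buggy -> bad block finalized
```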

(23:13):
Financial damage could be huge. No matter how much we strive for perfect code, humans always make mistakes. What really matters is how a network responds when things go wrong and how easily it can recover, and I think that's where Cardano shines. And I have to completely agree here. We were very lucky that we had the right

(23:35):
amount of delegation between the different nodes, we had a diverse set of node versions running, and it was easy enough for us to switch back away from the symptom, away from the bad chain that had the bug, and move over to the correct chain. So I thought that was absolutely amazing. Now, how did the wider crypto community

(23:55):
react to all this? There are a couple of different reactions, some good and some not so good. But let me start off with the good ones. This here, this is from Toly. He is a co-founder of Solana, and he writes this: I am going out on a limb and actually saying that this is pretty cool. Nakamoto-style consensus without proof of work is extremely hard to build.

(24:19):
The protocol functioned as designed in the presence of bugs. That's his comment on Berry's explanation of how Cardano recovered itself in this situation. Now, was this with good intent? I don't know. It seems like Toly here and Mert, another co-founder of Solana, have a bit of beef,

(24:39):
and it seems like he's doing this to spite Mert. So I don't know what's going on here, but I have to say the crypto space is very fickle. A lot of people in the space are arguing on semantics, saying that it did go down, that the chain split and forked, that it did cause downtime, and in a way it kind of did, because exchanges and wallets had to stop processing

(25:01):
various transactions. You could see on Coinbase even, they had in their status monitor that Cardano transactions were paused. You know, there were a lot of people on X saying how bad it was that they were trying to move their ADA off Coinbase, or move it to Coinbase, and it just didn't work, because various things had to be halted. Now, the ramifications of all this were huge

(25:23):
in that there's now a lot of misinformation and media coverage of this. Anytime something happens in the Cardano ecosystem, the media jumps on and starts trashing it, spreading half-truths and misinformation, and generally giving the chain bad PR. Some would argue any PR is good PR, but in this case I

(25:44):
have to say it's bad PR, because of the misinformation that went through on crypto Twitter. So this is a post from Cointelegraph — for those that don't know, Cointelegraph is probably the biggest crypto media outlet out there: Cardano suffered a chain partition after an abnormal transaction triggered a code bug. Reporting the incident, one cheeky X user claimed Cardano

(26:08):
mainnet is down and nobody has noticed it yet because nobody uses it. So yes, a cheeky X user did say that, but it's also a cheeky response here, or cheeky reporting I should say, for choosing that particular quote to add into their post. They could have chosen anything else, maybe something from Intersect, an actual official statement, but no, they decided

(26:29):
to choose the one that would get them the most engagement. So a lot of these posts out there were community noted: Cardano was never down, block production continued — and then linking off to the official incident report. Patrick Tobler here from NMKR pulled up some interesting data, and this was during the supposed downtime on chain: the

(26:51):
platform NMKR actually still managed to mint quite a few NFTs across nineteen different projects. The NFT market is overall pretty flat at the moment, so nineteen isn't that much, but the fact that these projects could still mint during the partitioning of the chain meant that you could see, by the evidence here from minting, that

(27:14):
the chain was still going. So awesome there to see those types of stats come through. Now, if you want a really deep retrospective of this entire incident, Andrew Westberg did a brilliant video here. I haven't gone through it all, but it goes for just under twenty-five minutes, where he goes through what actually happened, all the technical details there, and the takeaways at the end as

(27:36):
well from this particular event. So if you want to go through that, I encourage you to watch that video. Andrew is the CEO of Moneta, the USDM stablecoin on the Cardano ecosystem, and also one of the very first stake pool operators, from the ITN stage, in that era, with Blue Cheese Stakehouse. So do check out

(27:58):
his stuff and his stake pool. Now, just a couple more posts here for you guys. This is the second attack on the Cardano ecosystem. The first one was that massive spam attack of smart contracts, where they crammed in one hundred and eighty-three, I think it was, smart contract executions on chain and really tried to push it

(28:19):
as much as possible to try and make it fail, and that didn't go too well. It became a laughingstock and became a meme as well. And this second one slowed down the network but actually helped clear out a bug quicker than it would have been cleared if the attack hadn't been put into place. So at what point do people realize that they're helping, not attacking? Now, this here was a

(28:41):
video that Charles posted up, I think it was at three a.m. his time, when this finally all fixed itself up. But let me just play this clip here for you.

Speaker 2 (28:51):
All day, Jair and I were with Aggelos, actually, because he was in town for the workshops and meetings with the Cardano Foundation, Intersect and Emurgo, and I personally would like to thank the Cardano Foundation, Intersect and Emurgo for their professionalism and their attentiveness here. We all worked together,

(29:11):
set our differences aside as one team, and you know what, everybody was exactly what they needed to be. So I'm proud of their work, proud of the time and effort that they put in, and there was prompt communication throughout, and so I'm glad that the institutions were able to do that, whatever has been said in the past. Hopefully this can lead to a new chapter of collaboration. We'll see. But it was a good day,

(29:35):
and I'm proud of the people that showed up, like Marcus and others from the CF, to work with us. So that was a bright area.

Speaker 1 (29:43):
See, I have to say that was a really good outcome from this particular event. The founding entities have always been fighting each other over, I don't know what it is really, the history goes back quite far, but it was really good to see that at this point, when it was needed, the entities — Intersect, Emurgo, IOG, the

(30:03):
Cardano Foundation — and the community, everyone came together all at the same time to fix this issue, to respond to the mammoth effect that this malformed transaction had, coordinated communication, and got the message out there to everyone to upgrade

(30:24):
to fix the issue as quickly as possible. So that was absolutely amazing, the coordination that was done. And like I said, I went to bed knowing that the issue was there but that I didn't need to do anything at that point in time, then woke up early and found everything already fixed, so I thought the response time was absolutely amazing. And then reading through

(30:45):
all the communication and comms afterwards was absolutely stunning. So this was really good to see from all the founding entities and everyone in the community. Now, this is where I'll probably leave you guys, and this is the final statement from Intersect here about the myths and facts of what actually happened and what is going around at the moment. So let me just read this one for you guys.

(31:07):
So, if you are listening to this — I read this out because a lot of people listen to the audio podcast and I get almost as many listens as I do views on YouTube, so I read this out for everyone that isn't watching. Okay. Myth one: Cardano went down. Fact: no. While users experienced some disruption and things were slowed

(31:27):
down due to a temporary chain split, Cardano continued producing blocks and maintained its integrity. Myth two: Cardano was hacked. Fact: the core protocol, consensus, and cryptography were not compromised. This was an edge case in the node implementation. Myth three: nobody uses Cardano and nobody noticed. Fact: SPOs, engineers, exchanges, wallets,

(31:50):
and explorers reacted in real time, which is why the issue was contained so quickly. Myth four: an AI-using teenager brought the network down. Fact: a technically skilled individual crafted and submitted the transaction after it was observed on the preview testnet. Relevant authorities and

(32:10):
law enforcement are being notified. Myth five: someone essentially rolled back the chain. Fact: independent SPOs, exchanges and relays voluntarily upgraded to the fixed node version, allowing the healthy chain to outweigh the invalid one through normal Ouroboros consensus, not centralized control. That is very true. I still haven't

(32:31):
upgraded my nodes and they're still working just fine, so I wasn't forced to. Response, security and next steps: ecosystem teams formed a joint incident squad, shipped patched software within hours, and followed established disaster recovery practices. Cardano operates an active bug bounty and responsible disclosure program, which were bypassed in this case,

(32:53):
so the incident is treated as potentially malicious and referred to authorities. A thorough retrospective will examine the incident in exhaustive detail, and the recommendations that spring from that will only make us stronger. So then, thank you, Intersect, for clarifying that. There was a lot to go through there, guys, and if you made
(33:14):
it all the way to the end here, thank you
so much for sticking with me. There's lots of ways
that you can support the channel with the buy me
a coffee links, Patreon memberships down below, and YouTube memberships
as well. And if you can't do any of that,
that's totally fine. You don't need to, you don't have to,
but if you can just hear that thumbs up like
subscribe notification bell on your way out. And if you

(33:34):
got something out of this, if you learn something new,
if you aren't bullish on Kadana already, hopefully you are.
There's a lot to take out of this video and
to know that something as robust as Kadana can survive
this type of attack is absolutely amazing to see. Okay
with that, guys, I'll see in the next video.