All Episodes

May 24, 2025 54 mins

LINKS:

 https://distrust.co/software.html - Software page with OSS software

Linux distro: https://codeberg.org/stagex/stagex

Milksad vulnerability:  https://milksad.info/

In this episode of Cybersecurity Today on the Weekend, host Jim Love engages in a captivating discussion with Anton Livaja  from Distrust. Anton shares his unique career transition from obtaining a BA in English literature at York University to delving into cybersecurity and tech. Anton recounts how he initially entered the tech field through a startup and quickly embraced programming and automation. The conversation covers Anton's interest in Bitcoin and blockchain technology, including the importance of stablecoins, and the frequent hacking incidents in the crypto space. Anton explains the intricacies of blockchain security, emphasizing the critical role of managing cryptographic keys. The dialogue also explores advanced security methodologies like full source bootstrapping and deterministic builds, and Anton elaborates on the significance of creating open-source software for enhanced security. As the discussion concludes, Anton highlights the need for continual curiosity, teamwork, and purpose-driven work in the cybersecurity field.

00:00 Introduction to Cybersecurity Today
00:17 Anton's Journey from Literature to Cybersecurity
01:08 First Foray into Programming and Automation
02:35 Blockchain and Its Real-World Applications
04:36 Security Challenges in Blockchain and Cryptocurrency
13:21 The Rise of Insider Threats and Social Engineering
16:40 Advanced Security Measures and Supply Chain Attacks
22:36 The Importance of Deterministic Builds and Full Source Bootstrapping
29:35 Making Open Source Software Accessible
31:29 Blockchain and Supply Chain Traceability
33:34 Ensuring Software Integrity and Security
38:20 The Role of AI in Code Review
40:37 The Milksad Incident
46:33 Introducing Distrust and Its Mission
52:23 Final Thoughts and Encouragement

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:00):
Welcome to Cybersecurity Today on the Weekend.
This is another of those shows where I have a fascinating discussion with someone doing some interesting stuff in the cybersecurity arena.
Advance warning: we wander around on a lot of topics, but I think you'll find it interesting.
My guest is Anton Livaja from a firm called Distrust.

(00:23):
I'm happy to meet you.
You've had a pretty incredible career.
First question I wanted to askyou was, how does a guy with a
BA in English literature fromYork become a security guy?
Except that I have, my undergraduate is in literature from York.
No way.
Okay.
There you go.
Just 20 years before you.

(00:44):
Gotcha.
See, you would understand better than most, yeah, how you get from a place like that to being in tech.
But yeah, I wasn't really sure what to study.
I always liked reading and writing, so that's how I ended up doing my Bachelor of Arts in English literature.
And then when I finished my degree, it was time to get a job.
And it was, what am I gonna do with this degree?

(01:04):
Either become a teacher or an author, maybe pivot into journalism.
But I got lucky and my friend gotme a job at a startup that had
a SaaS product that was if youimagine Netflix, but for Android
apps that you would usually pay for.
So you pay like a few dollars a monthand then you get access to a huge catalog
of paid apps that have an overall value of, I dunno, $10,000 or something.

(01:25):
So I got my first job there andmy first job out of university
and I had to do business sales.
And my job was to go on the GooglePlay Store and look for developers
that hadn't been contracted yet to theplatform, and then check our internal CRM
And then if we didn't have thosedevelopers in there, I had to go
and reach out to them and try tosell them on joining the platform.

(01:46):
I had already had a little bit of programming experience, but a lot of my work seemed very much like something you could automate.
So I started practicing myscripting chops with Ruby and I
actually wrote a web crawler thatwould get me all of the developer
names from the Google Play Store.
And then I had a script that would checkin our CRM who's not in there, and it
would spit out a list of people to contactand then even further automated that.

(02:11):
So that would generate myemails based on a template.
And I basically went from, spendingall day clicking through Google Play
Store to running my bot and actuallyvery quickly won first place for most
developers contracted in a month.
And that was my foray into realprogramming where I really fell in
love with the idea of automation.
And it just showed me thepower of programming and that

(02:32):
kind of set me on that path.
Now, somewhere between there and here,you got really interested in Bitcoin,
or at least blockchain and I'm, Iwant to, I wanna separate the two
'cause I, the first question I haveis blockchain, is it still around?
Yeah, actually it's definitely aroundand there's been a little bit of a

(02:53):
struggle with finding good use cases,but a subset of these projects have
been pretty successful in finding reallife use cases that are quite useful.
There's a lot of opportunity seekingindividuals who are just creating
vaporware and not really contributinganything that, is worthwhile.
But there are projects that areactually doing useful things.

(03:14):
So one example I always give is stablecoins, which, for those who aren't familiar, are digital assets pegged to the value of some currency.
This is useful not necessarily in places where we have reliable financial systems, like in Canada or the United States, but, for example, in Argentina, where there have been events where the banks did predatory

(03:38):
conversion of people's savings.
So people had a lot of US dollars saved up, and the bank, overnight and without permission, converted that to their local currency at a very bad rate, not in favor of the user, and wiped out a huge amount of people's savings overnight.
And so for those people having a meansto bank themselves and have their

(03:59):
stable coins sitting in a digitalasset wallet that they control is
a really nice thing because theycan no longer trust their banks.
And you can extrapolatethat to other jurisdictions.
If you look at Venezuelavery unstable financially.
And so this is actually a verycool use case that's come out.
And stablecoins are much easier to send around as well; a traditional transfer can take a few days up to a week, and it is expensive.

(04:22):
There's a lot of clearances.
I could not believe thehoops I had to go through.
because of anti-money laundering.
You send anything over a thousand dollars, you are just tied up for days.
Precisely.
I always talk about Bitcoin as well,being a great alternative financial
system that's been growing and gettinga lot more adoption in all sorts of

(04:45):
industries, of course, in the financialsector, but that's its own kind of thing.
We could spend a wholepodcast talking about.
We could, but the essence of this,because this is a security podcast.
Yes.
The reason I was tracking back this,I'm always surprised at how much
hacking goes on in Bitcoin and crypto.
Why?

(05:05):
Because I thought blockchainwas inherently secure.
Remember, we were sold that bill of goods?
Yes.
Why is it so popular?
Obviously it's popular for thereason that people say you rob Banks
'cause that's where the money is.
You rob crypto becausethat's where the money is.
But why is it so prevalent in attacks?

(05:28):
Yeah.
So it's inherently secure.
The cryptography of blockchains is solid.
It uses the same cryptography used for TLSand elsewhere and has not been broken yet.
So it's designed well, but the problem arises from how people manage their private keys.
Those are the cryptographic keys that are actually the thing that enables us to move funds around and issue commands to blockchains.

(05:50):
This is where things fail, because there is this impression, even for people that are in blockchains, that it's this isolated island that is protected from the rest of the world. But in fact, the security footprint of all the traditional systems that came before is inherited by blockchains.
Because at the end of the day, ifyou're interacting with a blockchain,

(06:13):
you're using some sort of a computerwith an operating system that you
might be using for a number of things.
Maybe you're opening PDFs from anemail, which is actually a very
common vector that's resultedin huge hacks in this space.
One time, and this is a crazy story, one of the biggest hacks in recent history, $600 million, with this project called Ronin Network, occurred because

(06:36):
threat actors were able to convince somebody, a developer that was part of this project, that they were trying to hire them for a very lucrative position. And it was a multi-stage process where they had a screening and then they would pass them off to another person.
And eventually they sent them a job offer to sign, which was a PDF, and

(06:58):
they received this PDF on their workcomputer and opened it and basically
installed malware on their machine.
And they were able to use thismalware to exfiltrate keys, which
were used for signing off on certainthings happening in the protocol.
And that way, they were able to steal $600 million.
This is so funny, 'cause we always make these fancy terms for

(07:19):
this, but people are talking about linkless phishing now, and what you really described is linkless phishing.
Oh, then I ship you the PDF or whatever it is.
And yeah, I wouldn't open a PDF that I got on the first meeting with anybody, even if I sent it to myself. Exactly.
But most people are very naive about this. Back in the day, washing

(07:39):
hands before doing things, even for doctors, was like a crazy idea.
And it took us about a hundred years to normalize that.
And I think it's gonna take us a whileto normalize the idea of isolation.
So when I work with clients in mywork and personally, I actually use
an operating system called Qubes OS, which you may have come across.

(08:00):
Just, what was that?
Qubes OS, with a Q.
Yeah.
Yeah.
I know the name.
I don't know anything about it.
Okay.
So it's a really amazingtool and very simple.
Basically they took a hypervisor, the Xen hypervisor, and wrapped it up into an operating system.
And so what it does is it just makes itreally easy to spin up virtual machines

(08:22):
that are for, different purposes.
And so when you're opening a PDF you don't trust, you could just spin up a new virtual machine really quickly that's disposable, open your PDF in it, and then just trash it.
And so it creates this reallysimple way based on templates.
So you have templates.
Let's say maybe you have a Fedoratemplate or a Debian template.
You can create a cube and theycall them cubes, but they're

(08:44):
basically just virtual machines.
And so you can create purpose-specific VMs.
So all of a sudden it's not such a huge burden to create a separate VM for accessing your AWS account or managing an API token that's for production.
So you can create these little VMs, and it just makes managing them a lot easier.
You could do it by other means.

(09:04):
Of course, there are other toolsfor virtualization, but this
just wraps everything up intoa nice seamless workstation.
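The disposable and purpose-specific VM workflow described above can be sketched with Qubes' command-line tools. This is a sketch only: the template name (`fedora-41`) and the VM names are assumptions, and these commands only run inside a Qubes OS installation.

```shell
# Open an untrusted PDF in a throwaway disposable VM; the VM is
# destroyed automatically once the viewer window closes.
qvm-open-in-dvm ~/Downloads/job-offer.pdf

# Create a purpose-specific VM just for AWS access (template and
# names are illustrative; use whatever templates are installed).
qvm-create --class AppVM --template fedora-41 --label blue aws-admin

# Run a terminal in that VM; credentials stored there never leave it.
qvm-run -a aws-admin xterm
```

The point of the template system is that each of these VMs is cheap to create and destroy, so compartmentalizing by task stops feeling like a burden.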
Let's go back to our blockchain example,because I did want to, I did want to
go through that and talk about that.
So you spent a lot of time workingin this area over the past few years.
Yeah.
I've been in the industry for about 10years now, and the way I got into it was
really funny because one of my early jobs,after I described to you how I discovered

(09:28):
programming, but then I spent a lotof time studying and going to meetups.
I met some mentors in Torontothat really helped me early on.
I had this friend Josh, who's from a funny-named place in, I think it's Manitoba: Flin Flon.
And I would go, and, Flin Flon's not funny to me, it's,
I'm sure it's a lovely place, but I find the name, yeah, just funny.

(09:50):
You've got Biggar, Saskatchewan, and Flin Flon's just, Flin Flon's dull.
That's, yeah.
You've got Tisdale, the land ofrape and honey in Saskatchewan.
You've got, oh my goodness.
There's all kinds of funnynames on the Prairie.
Yeah.
So I just, watch him program over hisshoulder and that's how I learned.
But I landed my first, proper programmerjob at a company called Wild Apricot.

(10:11):
And it just so happened that the owner of that company and the CEO was Vitalik Buterin's father.
So the inventor of Ethereum, I worked for his dad.
And so very early on, whenEthereum was becoming a thing.
Vitalik would come around andeveryone would be talking about
Ethereum, and I was like, oh, cool.
That's interesting.
I knew about Bitcoin already, but I never,I didn't dive into Bitcoin properly.

(10:34):
So my actual like full entry intoblockchains was through Ethereum.
And eventually I started learninghow to write smart contracts on
Ethereum, joined a hackathon.
And I actually participated in that hackathon with somebody who now runs a big layer two, which is basically just a way to increase the throughput of a mainnet blockchain.

(10:54):
I fell into it, and then I reallyfell in love with what blockchains
were about, which was this reallyinteresting cross-section of game
theory and economics and of coursecryptography and distributed systems.
And as a, young programmer, itreally grabbed my attention.
And that was something that alsoreally pushed me into security
because all of a sudden, yourprivate key is actually money, right?

(11:17):
It controls money orsomething that's of value.
And that kind of raised thestakes and made me go interesting.
Yeah, this changes thegame in a lot of ways.
And eventually North Korea also realized this, and now a major portion of their GDP is crypto theft.
And then unfortunately they use that to fund, among other things, their nuclear program.
Yeah.
That and working for American companies

(11:39):
Yes.
Another area that I find just spectacularly amazing, again, is where you have all of these things that make hiring someone secure, but you can still hire somebody from North Korea.
Yes.
And that, I, I find that astonishing.
I have, and I've beendoing this for a long time.

(12:00):
I, back in 2003, had somebody workingfor me and they phoned me and said, I've
been working for you for six months.
Don't you think we should actually meet?
And I went yeah, we could do that.
But I ran a worldwide consulting practice right after 9/11.
And we weren't allowed to travel, so I got very used to doing phone and primitive

(12:24):
sorts of conferencing stuff at that time.
Sure.
It didn't make a lot of sense to fighttraffic in Toronto, so even then, I
still talked to my staff enough thatI think I would've spotted whether
they were North Korean or not.
Just amazing.
I'd love to know how they pulled this off.
Whoa.
I think now more than ever, it's becominga lot more dangerous because of deepfake

(12:48):
technologies, and that's something that'sbeen haunting me for the last few years
because, even three, four or five yearsago I realized that this would eventually
become a thing, once you can in real timedo video and audio deep fakes a lot of
what we use currently or used to use toverify who is, who just is out the window.
And North Korea seems to havestarted doing that a little bit.

(13:12):
They're still mostly just, doing oldschool social engineering, but I'm
sure soon enough there's gonna be alot more deep fake stuff going on,
and that will make it much harder.
But yeah, within the kind ofblockchain and digital asset space
I've definitely seen an increase inthis type of thing where, north Koreans
are trying to become an insider and.
A lot of my research and interest andwhat I work on with clients is, will be

(13:35):
around how do we protect from insiderthreats, and that's a very tricky game.
Coinbase just got nailed by people simply doing the old, well, we spent all this time talking about the technology of hacking and all of that, when you can just do the old-fashioned bribe-some-guy. Exactly.
And basically that whole hack seems to have been, they just bribed a few

(13:57):
people, and they were in positions where they had trust, so they managed to gain
all of that access to the blockchain.
and by the way, I think theywere being paid in stablecoin,
That even hackers don'twanna be speculators, yeah.
It's interesting.
I find that there's a lot of regulatoryrequirements around collecting KYC,

(14:18):
but not nearly as much about protecting that KYC, or it's not as stringent.
If it was up to me, you would have to have technical controls that, depending on the sensitivity of the personal information or data in general, require increasingly more individuals, and maybe across different teams.
So you could imagine if you needto access someone's passport, it's
not just you're a support person,you can just go in and extract it.

(14:41):
You would need to have, maybe you request it, and then two other people with the correct role have to approve that.
And on top of that, of course,layering things like how much data
and how quickly you can access.
So if you're trying to access morethan one record a minute, you're just
not gonna be able to do it and justlayer on those types of controls.
And I think that would reallydrastically reduce the blast radius.

(15:04):
Of what would happen in acase that someone gets bribed.
Of course it's not gonna stop everyattack, but it'll definitely reduce
the surface area drastically.
But companies just don't do thatbecause it's not commonplace.
No one's doing it.
So why would we do it?
We'll see.
Maybe Coinbase does something likethis now, and yet if you had a file
cabinet in your office and somebody hadto go and look up the personnel files,

(15:28):
they'd have to come to you for the key.
And the first question you mightask is, why do you need that file?
Sure.
Yeah.
And good luck hauling, a hundredthousand files out of the cabinet.
I get that part, but I'm just saying that we don't seem to carry over the things that we would find natural into an automated security environment.

(15:50):
This whole idea of need-to-know is something we always had with physical files, and
I appreciate that.
You're never gonna be perfect.
My wife always says, when weleave the house, lock the door.
We live in the country.
Somebody could just smashthe window and get in.
It's not that hard a deal.
They're gonna meet a dogthey don't want to meet.

(16:11):
but then I look at itand say, she's right.
You make it a little more difficult.
Sure.
Making, making your attacker'slife more difficult.
It's, and it's never gonna beimpenetrable, but you can always make
it hard enough that they give up.
Yeah.
Yeah.
So you stayed with it,with this, with blockchain.
What are some of the other things about blockchain where you'd

(16:33):
stand back and go, security peopleshould really be thinking about this.
What are some of the otherideas that you've seen over
the time you spent with this?
I've spent quite a bit of time workingon vaulting systems specifically,
or custody systems that are meantto protect private keys that manage
cryptocurrency or digital assets.
And this is where you really needto go as deep as you can down the
security rabbit hole because you aredefending from state funded actors

(16:56):
and all attacks are basically on thetable when someone's trying to steal
a few billion dollars worth of crypto.
Which are cases that I've dealt with, and with my colleagues I've worked on some of the world's leading custodial platforms, like BitGo, Unit410, and Turnkey.
And so in these systems, you really needto go and explore every possible vector.

(17:18):
And so one very interesting vector that came up, for example, is compiler-level attacks.
Or, more on the supply chain side, we saw the XZ backdoor attempt.
I'm sure there are other attempts that are in progress, or maybe some that were already successfully deployed that we're not even aware of.
And so how do we close off some of those doors?

(17:38):
we have a show rule.
Before you go on, you can't just say the XZ thing and assume everybody knows it.
So Fair enough.
You have to explain what that is.
'cause we have people who may have lives.
And do other things,
Sure.
So the XZ backdoor was this really scary initiative, a super long campaign spanning about three years, where somebody, we're unclear who it is, but likely

(18:02):
a state-funded actor, or a very sophisticated attacker at the least, won the trust of a maintainer of a very widely used library.
That's a compression library.
And they used this as a means to try to compromise the SSH daemon that's used to connect to servers remotely in a secure manner.
And they were super close to actually successfully getting their backdoor in.

(18:24):
And the only reason it was caught was because, I believe, somebody at Microsoft was doing benchmarking, using a tool called Valgrind, to figure out the speed of SSH connections, and noticed that there's
maybe a 10 millisecond difference.
It was a really small difference, but they noticed. And that's the reason we were spared; the fallout would've been crazy. A backdoor where you can just go

(18:46):
around all the defense mechanisms and connect directly to most servers around the world, if they updated to that version, would've been horrendous.
And so I raise this as an example of a type of attack, and there seems to be a trend towards more supply chain attacks because we're not really doing enough about it.
But this is such a great example, and I'm glad you went

(19:07):
through the whole story, because it's one of those things. I had a whistleblower on the program about two weeks ago, and the amount of pressure that he got to stop investigating things.
Because he'd find an anomaly and he'd raise it, and he'd have to go, I'd better really figure this out, because I can't go talking about this with my boss, right?

(19:28):
And I went, oh my God, this guy atMicrosoft sitting there, God knows
when, but let's say it's 10 o'clockat night, he probably should be home.
He sees this little blip and hegoes, I gotta find out what that is.
And God bless them, whoever.
Absolutely.
His management was such that he felt that was a good thing to do.

(19:51):
I don't know how much damagehe saved the world from.
Oh my goodness.
The curiosity of security people isnot something we talk a lot about.
And it's so essential.
Absolutely.
Caring about security is half the game. People are like, so what do we need to do to be secure?
I'm like, first, job number one: care about security, care enough about

(20:13):
security and be curious, like you said. Otherwise you're not gonna make it.
So we were talking about blockchain and just some of the things that you discovered or have seen there, right?
So I started talking about some methodsthat we saw were underutilized and that
you need to think about because of thesophisticated supply chain attacks.
So two things that I've been spending a lot of time and energy on are full source

(20:35):
bootstrapping, which I could talk about at length, but essentially it's just building from pure source code and not hiding anything inside of a binary.
So you don't want binaries when you're building things, because they're hard to inspect, they're opaque, and so we don't really know what's inside of them.
So when we build, for example, even a compiler, ideally,

(20:55):
that's a really good thing to full source bootstrap, because we want to make sure that our compiler can't go and introduce vulnerabilities.
I'll talk about another reallyexciting attack now that happened
but I only discovered it recently.
I was surprised that I hadn't known about it, but it's called XcodeGhost, and it was this fascinating attack where, in China, developers were having

(21:17):
a hard time connecting to Apple servers, so they downloaded Xcode from unofficial mirrors.
The problem is, this version of Xcode was fully functioning, except the
compiler was modified so that whenyou build an application with it for
iOS, it actually injects a backdoorand they successfully compromised a
number of different applications anddownstream it impacted close to a

(21:39):
million devices and it was basicallyexfiltrating data from these devices.
And so this is a cool example.
It's a scary example of how acompiler can be the source of
compromise if you don't know.
Was it intentional or accidental?
It was intentional.
It was definitely intentional andrelied on the desperation of the
developers that were trying to gettheir tooling and they couldn't because

(22:01):
of the Great Firewall of China.
And so they basically used this to theiradvantage and, got people to download
from a source they shouldn't have.
So exactly.
And so this is something that Ken Thompson, who was super prescient back in the eighties at Bell Labs, realized: even if your source code is fully reviewed and trusted, the compiler itself could actually inject

(22:22):
malicious behavior into your binary.
And so this is something that wethought about a lot and realized that,
when you're building systems thatmanage billions, you need to actually
make sure that all of the softwareideally is full source bootstrapped.
The other method that wecame upon is deterministic or
reproducible builds, right?
So the idea is that your software should always compile to

(22:45):
the exact same binary, bit for bit.
And when you do this, it givesyou this really easy way to check
the integrity of your software.
I can talk about another attack right now that most viewers, or listeners, probably know about, but maybe not: the SolarWinds incident that happened.
Yep.
Some years back. We're all familiar with SolarWinds.
That attack could have been avoidedwith the help of reproducible builds.

(23:09):
And SolarWinds actually put out a paper after some time saying as much.
But the failure of SolarWinds, basically, was that the build wasn't deterministic. So, like, you build it now, and then you build the same version five minutes later, and the binaries would be slightly different.
Not because of being functionallydifferent, but because there might

(23:31):
be a timestamp or some artifact fromthe chip set that the compiling is
being done on, or an environmentvariable that's slightly different.
But essentially, once the binaryis out of the build pipeline,
there's no way to check if thatbinary is what it's supposed to be.
With reproducible builds because we'reforcing the binary to be identical.
Now, what that enables us to do iseven if the binaries are already out of

(23:53):
the build pipeline and be being, maybeit's uploaded to a page where you can
download it, you, for example, as adeveloper could go and build it on your
machine and then compare that binary towhatever came out of the build pipeline.
And now even if the build pipelinewas compromised, the likelihood that
your developer computer and the buildpipeline was compromised is much lower.
And of course, you can extend thisand say, okay, we're a really big tech

(24:17):
IT company and our tools are used byFortune 500 and government organizations.
So what we're gonna do when we buildour software is we're gonna have
three different build servers on three different cloud platforms, with separate access permissions for this infrastructure.
And we're gonna build the same softwareand then compare it and make sure that

(24:38):
all the binaries are exactly the same.
Now, the likelihood that all thosethree systems are compromised
in the same way is very low.
And so this is a really cool methodthat isn't really being used as widely.
And yeah, we spent sometime working on this stuff.
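The verification step described above, comparing a published binary against an independent rebuild, comes down to comparing digests. A minimal sketch, where the file paths are hypothetical and the two stand-in files simulate what would really be a downloaded release artifact and your own rebuild from the same source tag:

```shell
#!/bin/sh
set -eu
# Stand-ins: in reality, vendor/myapp would come from the project's
# release page, and local-build/myapp would be your own rebuild of the
# same version on an independent machine.
mkdir -p vendor local-build
printf 'example-binary-contents' > vendor/myapp
printf 'example-binary-contents' > local-build/myapp

# If the build is reproducible, the SHA-256 digests match exactly.
published=$(sha256sum vendor/myapp | cut -d' ' -f1)
rebuilt=$(sha256sum local-build/myapp | cut -d' ' -f1)

if [ "$published" = "$rebuilt" ]; then
    echo "OK: binaries are bit-for-bit identical"
else
    echo "MISMATCH: artifact may have been tampered with" >&2
    exit 1
fi
```

The multi-cloud variant is the same check fanned out: build on several independent servers, collect one digest from each, and require that all of them agree before release.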
And actually, my colleagues and friends and I built our own Linux
distribution that makes this afundamental part of how we build it.

(24:58):
We start by full source bootstrapping a compiler, and then we do that deterministically as well, of course, and then build the whole tree of all the packages that are available, also with that compiler that we now know is trusted, fully deterministically.
So anyone can go and reproduce the wholething and see if their hashes match.
Of course, that doesn't solve theproblem of trusting the source code.

(25:19):
You still need to know what's in the source code, but it does eliminate the risk of, what if the compiler injects something, or what if the build environment has something malicious and modifies the code.
And most software companies actuallyare exposed to that risk today.
But how do you, these differences areintroduced by factors like, as you said,

(25:40):
timestamps, the peculiarities of the chip.
How do you filter that stuff out andstill know that you've got two identical
source or two identical binary files?
How do you basically get a rid ofthese little variations that make
your binaries non-deterministic?
A lot of the time it's as simple as

(26:03):
holding a gun up to the timestamp and saying, you're not changing. So you can just fix the timestamp, set it to one, and that will actually force all of the timestamps to always be identical.
And for some software, that'senough for other software.
You have to play with the compilerflags because, for example, when
you're doing certain parts of the buildprocess it might be non-deterministic

(26:27):
and it might do things in parallel.
And so because it's doing things inparallel, it might do them in different
order depending on when you're running it.
Sometimes you have to go and actuallypatch the software to change the
way it's actually built to, maybeit just intentionally goes and grabs
some details about the chip set it'sbeing built on and injects that.
And so we've had to actually go in; for Node.js this was the case, and we had to

(26:50):
work with the developers of Node.js to change this about how they build their software, so we can make it deterministic.
But yeah, it can be timeconsuming for some software,
for other software it's easier.
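Pinning timestamps, as described above, is commonly done through the `SOURCE_DATE_EPOCH` convention that many build tools honor. A minimal sketch with GNU tar, where the archive contents are illustrative: two archives of the same tree, created at different wall-clock times, come out bit-for-bit identical once timestamps, file order, and ownership are normalized.

```shell
#!/bin/sh
set -eu
# A tiny source tree to archive.
mkdir -p demo
printf 'hello\n' > demo/file.txt

# Pin every stored mtime to a fixed epoch and normalize file order and
# ownership -- the usual sources of non-determinism in tar archives.
export SOURCE_DATE_EPOCH=1
tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" \
    --owner=0 --group=0 --numeric-owner -cf first.tar demo
sleep 1   # a later wall-clock time would normally change the output
tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" \
    --owner=0 --group=0 --numeric-owner -cf second.tar demo

# Identical despite being built at different times.
cmp first.tar second.tar && echo "deterministic"
```

For a compiler or a large application the same idea applies, just with more knobs: fixed epochs, pinned build paths, and patches where the build intentionally embeds machine details.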
Of course, you also want to do fullsource bootstrapping for languages.
So, self-compiled languages like Rust.

(27:10):
If you download a Rust binary from somewhere and you don't know how it was built, and then use that to build the next version of Rust, the trust level on that is much lower than if you go and first bootstrap the GCC compiler, or a tiny C compiler.
And then you go to the first version of Rust, and then the next one, and iteratively build up until you have a clear path from before

(27:33):
Rust was self-compiling all the way to the latest version of Rust.
So this is another thing that we discovered as a good defense mechanism; basically it allows you to say certain categories or classes of attacks are no longer viable, you basically cut those off.
And so you still have things to worryabout, but at least less things it, yeah.

(27:53):
And again, obviously, when we're talking about it in this manner, and you talk about doing it the first time, it's probably labor intensive.
Probably takes a lot to figure out.
Yeah.
But I'm just amazed that with all of theproblems we've had with open source code,
that people have not put out a protocolor something that says, Hey, this is

(28:15):
how you establish that what you've got is a bit-for-bit identical version of the
original and has not been interfered with.
Yeah.
I find this astonishing thatno one's really pursued that.
There are a few distributionsthat are working on this.
So the one that we built, it's called StageX. And there are two other
ones that have similar approaches, where they do full source

(28:36):
bootstrapping and determinism.
They're called Nix and Guix.
And Guix is actually a fork of Nix with slightly different approaches.
But none of the existing distributions were as strict as we wanted to be on this idea of full source bootstrapping and determinism.
So the one we built, we basically made arule and said only if something is full

(28:57):
source bootstrapped and deterministic does it go in. And any changes that are made during packaging, the packaging steps that actually describe how the package is created in our distribution, have to be reviewed and signed by two individuals. And so we basically took these things and tried to do something akin to what you were just mentioning. That was the idea. How do we make something that's fully open source and free and helps people get

(29:22):
determinism and full source bootstrapping into their systems as a kind of building foundation. Of course, you need to force your application to also be deterministic to fully close off the loop on that, but that's exactly the idea. How do we get this into more people's hands? And now it's actually fairly easy to use. It's almost an Alpine replacement, like a drop-in replacement.

(29:44):
So if you're using Alpine, Python or Rust, you could just pull in an image. It's on Docker Hub as StageX, and it's the same software, same version, except it's built differently. So it's built using a full source bootstrapped compiler. All the languages are full source bootstrapped. And it's fully deterministic, so it's pretty cool. And yeah. And this is available now.

(30:04):
This is, you've made this open source, freely available? Yeah, it's open source. And we did this because a lot of our clients cared about this kind of attack vector. So it's a cool feedback loop, where blockchain companies spent money to do a lot of this work, and then we open sourced the work and made it available to everyone. And it's really nice because it's already being used.

(30:25):
Actually, if you're familiar with Talos Linux, they're very widely used; the Linux distribution that specializes in Kubernetes. They decided to use our distribution to build theirs. Ours is essentially a secure toolchain for building software. And they thought, hey, this would be a good way to build our software, because we close off a bunch of attack vectors on this compiler level

(30:45):
and environment kind of risk level. I will be able to say, I knew you when. Boy, this is me. Can you send me a link and I'll make sure I post that in the show notes, at least for the YouTube version. That'd be very helpful. Yeah, we spend a lot of, and continue to spend a lot of, time and energy, essentially all of our free time. When we're not working on projects that make us money, we go

(31:07):
and pivot to this, and sometimes we're lucky enough to actually get some of this stuff funded too. But it's a labor of passion and love, and we love to see people use it, because we believe it can help basically anyone. That's the beautiful thing. It's, oh, you're using Python in your stack? Great. Just drop in our Python image. Or if you're using Rust or

(31:27):
Node.js or whatever. Yeah, no, this would be handy. The other thing that just drives me crazy when we talk about blockchain: one of the first use cases for blockchain that I ever heard that made business sense, other than crypto, was the idea of traceability. And in the food system, a really messy system full of people who are

(31:52):
not necessarily PhDs, through this. And I'm not making fun of them, I'm just saying this is not an organized system. Yet I can trace a piece of lettuce that is here in Minden, Ontario, which is in the middle of nowhere, back to the original field where it was. Because if they get an outbreak of some sort of E. coli or something

(32:14):
like that, they need to be able to pull it off the shelves. And yet, with all of the brain power we have in blockchain, we've never been able to establish the provenance of software and software elements or modules in the same way. Yeah. That's a good question and a very difficult problem that

(32:35):
people have been trying to solve for a long time, especially because blockchains fit the shape. It's, yeah, this should be the solution. I think a part of the problem is that blockchains are very good at preserving data once it's on the chain, but it's very hard to get high-quality data into it. And if you think about the blockchain trying to fit into the operational

(32:57):
complexity of a supply chain that spans maybe multiple countries and different points where the data has to be entered, I think that's where things start to break down. How do you make sure that reliable stuff gets into a blockchain? Because if you get poo on a blockchain, it'll just be poo on a blockchain, and it's not gonna be great. So I think the operational and logistical parts of how these

(33:18):
supply chains work are the hard part. Once you have good data, you can throw it basically on anything. In a sense, the blockchain's just the distributed data layer. It's not really anything crazy. It's not anything crazy? It's actually pretty crazy. Yeah. It's, yeah. It's not something you do in an afternoon, but it is. But have you given any thought to this, and I just ask this while we're talking: you've obviously conquered one piece of this and can say, I

(33:43):
can establish the provenance of a binary. I can make sure that it's the same. Could we apply some logic to that in the open source world, so that we are making sure we are actually getting the correct modules and that we're not getting somebody's, yes. Yeah. The answer is yes. And we've been thinking about this a lot, this idea of how do we

(34:05):
eliminate single points of trust in any individual or computer. I think that's a core idea, and yeah, a concept to filter ideas through, and trying to think about holistic security that thinks about the full life cycle of software that we consume. And so we talked about, yeah, like you said, this part of compilers and determinism and what that can address.

(34:27):
But still, a lot of holes remain, even on the level of, if you look at compression algorithms and tarballs. So with the XZ attack, what was actually happening was that there were additional artifacts in the tarball that were not in the source code. And so it was hiding in the tarball. And this is not something you would think about. You think, oh, source code goes into the tarball, which is just

(34:49):
the source code, and then you untar it and you do stuff with it. But how do you make sure that's actually the case? So that's the one step before what we were talking about, where it's, how do we verify that we're actually building from the source code we're supposed to be building from, without any extra stuff in there. So here we are trying to build some tooling as well.
(35:10):
There's a project that I found that is, I forgot what it's called. It's like predictable binaries, or it's by a Fosse. I'd have to look it up. But it's essentially a tool that detects binaries in source code. And so that's one idea. You can try to find binaries, eliminate binaries, 'cause those could sneak things in. But then also, can we create a way to standardize the source code and make

(35:34):
sure that anyone can easily check thatthe source code is exactly the source
code you're meant to be building from.
But we can go even earlier than thatwhere we need to review the source code
to make sure that there isn't, if someonewrites some malicious code into the
source code and no one's looking at it,that there's no way to really address it.
So we need reviews.

(35:55):
So one way that we've been thinking about that, in my group of friends and colleagues, is how about we create a crowdsourced kind of protocol? We are calling it SigRev, which is like signed reviews. So the idea is, let's make it easy to create reviews for specific
(36:18):
versions of software, and then cryptographically sign them using PGP. Or you could sign with whatever you like, but PGP is probably a good case. And then start building a database of software that's been reviewed. And you can go and, let's say you're an independent researcher: if you reviewed some piece of software, you could say exactly which commit or which state you reviewed, have a

(36:39):
hash of the tree, and then say, I reviewed for this and that, and then sign it off.
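A rough sketch of what such a signed review record might look like. The field names and the `example-lib` package are hypothetical, not the actual SigRev format, and the PGP signing step is left to an external tool such as gpg; the point is that a canonical, hashable record pins a review to an exact commit and source tree.

```python
import hashlib
import json

# Hypothetical SigRev-style review record (illustrative fields only).
review = {
    "package": "example-lib",          # hypothetical package name
    "commit": "0123abcd",              # exact commit that was reviewed
    "tree_hash": hashlib.sha256(b"source tree contents").hexdigest(),
    "verdict": "reviewed for backdoors and unsafe crypto; none found",
    "reviewer": "researcher@example.org",
}

# Canonical encoding so every verifier hashes exactly the same bytes.
payload = json.dumps(review, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(payload).hexdigest()

# The digest (or payload) would then be signed, e.g.:
#   gpg --detach-sign --armor review.json
```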
And this is another thing that I feel is a huge hole right now. If you look at the NPM and Python package ecosystems, they're full of literal malware. And for whatever reason, we got to a place where we just feel okay with

(36:59):
downloading tons of third-party libraries without actually reading the code. And we just say, we use static analysis, so it'll be fine. But it's not fine, because static analysis only catches things that we've previously detected. And so novel stuff we don't have signatures for, similar to how malware is, right? Like polymorphic malware: it just slightly changes the shape of the malware, and all

(37:20):
of a sudden we're blind to it, right? We're trying to develop tools to fight this, but why aren't we reviewing all the source code? And there are a few answers for this. Often 80% of an application is third-party code. That's just open source libraries, right? So how are we gonna review all this? It's hundreds of thousands or millions of lines. But the responsible thing to do, especially if you're running a

(37:43):
business that is maybe a financial institution or manages a ton of personal information, would be to literally review every line of code, right? And you actually say, okay, what if we all collectively, as we reviewed software, posted our results and said, yeah, we reviewed this stuff and we actually found it to be secure for this specific version. Now you might use a different version and say, this hasn't been

(38:03):
reviewed, so I have to review the difference between those two. But over time, if we were all doing this sort of thing, we would start building up a repository of knowledge of which source code has at least a slightly higher level of trust. 'Cause right now it's just, a lot of people use it. Probably somebody looked at it, but we don't really know. I have to; they'll come and take my podcaster license away if I don't ask about

(38:25):
AI at some point in the conversation. It's just the way it is. Forgive me on this one, but why hasn't anybody started to look at using AI to do some of these tasks? Especially when you talk about that many lines of code. We can have a great debate about whether AI can write code or not, and that's a religion now, as to what you believe rather than anything else.

(38:46):
But the one thing I think we all agree on is it's really good at documenting code. Definitely. And I've been very surprised to not be reading a lot about people who are saying, we've got this problem, I'm gonna apply AI to it. Have you heard of anybody doing this? I haven't looked into it too much, but I am very much in favor of it. I think it still probably is in its infancy.

(39:07):
I wouldn't trust a tool like that as, oh, we have this now, we can be hands off. But the way I think about it is, why not have an additional input, or a layer that you can put into your automation, the whole shift left, to use that term. You have your linter, you have your static analysis tools, and now you have your AI analysis tools. And as people figure it out and they become more sophisticated and advanced, I

(39:30):
think, yeah, they could be very effective. I don't know when we get to a point where we can fully trust these tools to make judgments. Why would you have to fully trust it? It's the old thing of, that's why we build in layers, 'cause we don't fully trust anything. Exactly. Exactly. In fact, my friend David Shipley would tell me that organizations that believe they're totally protected are the most likely to get hacked.

(39:52):
You need, Andy Grove said, only the paranoid survive. I think that's true in security. But having another layer: we're already using AI to look for anomalous things in software, that says, hey, we don't have a signature for this, but this looks weird. Totally. Yeah. It doesn't hurt. When I first started thinking about it, I was kinda like, wait, no, but we don't need to rely on AI.

(40:13):
It's literally just another set of inputs.
So, great.
I'll take more inputs.
Anything?
We rely on people.
Sure.
And people make mistakes.
I always find this just totally amazing: everybody says, yeah, but AI made a mistake. And you go, yeah. And so did a person, yeah. It learned from us, and what's your point?
Yeah.
It's, nothing's perfect.
That's why we have layers.

(40:34):
That was at least my understanding of it. Yeah, absolutely. I have to ask you, just going on to this thing: I was looking through your LinkedIn and I was reading about this Milksad disclosure. Oh yeah. And I'm glad you brought that up. I know I'm drifting into a new topic, but this was so fascinating. We've talked about some of the human things we can do, but this really was

(40:55):
a mechanical or a technical error. Yeah. That just went by everybody. Yeah. It was a very interesting bug that actually impacted one of my friends. And they lost a bunch of Bitcoin. And the really scary part is that they went and followed a very widely used and popular book called Mastering Bitcoin.

(41:16):
And this book recommended using specific software to generate your Bitcoin wallet. And it recommended that you do it in an offline environment, on an air-gapped system that's not internet connected. And so they followed these instructions and did all this, and then years later, their wallet got drained. We assembled a research group of friends who started looking into how this happened.

(41:37):
Eventually we discovered it was a mistake in how they implemented the cryptography around randomness, or entropy. They used what's called the Mersenne Twister, which is a random number generator algorithm, but it's not a cryptographically secure random number generator. It's actually usually used for Monte Carlo simulations.

(41:58):
It's not meant to be used for generating private keys. By doing this, they reduced the key size to only 32 bits, which, with a well-optimized cracking algorithm, you can map out fully in about a day. So this is exactly what we did, and we were able to reproduce, basically build, all of the keys that were part of that key space.

(42:21):
And so on milksad.info, there is a series of blog posts that one of our teammates, Christian, wrote. It goes into the details, really low level, of how it all worked and how we actually reproduced the vulnerability. And we had a responsible disclosure and back-and-forth with the developers.

(42:42):
It was very interesting. We actually presented it at Chaos Communication Congress last year in Germany, in Hamburg. So there's also a talk recorded about it; if you prefer video format, you can watch that. 'Cause I found this just fascinating. Again, for all the technology we have, it's really nobody being curious

(43:05):
enough to go, that's only 32 bits, man. A gaming machine will get through that in a day. It's a very tricky bug, because you need to know cryptography relatively well to figure this one out. And also, I've seen a lot of books that teach computer science recommend using the current time as the seed for randomness, and time isn't random.
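The point about time-based seeds can be demonstrated directly: an attacker who can guess roughly when a key was generated only has to search a tiny window of timestamps. A minimal sketch, contrasting it with the OS cryptographically secure RNG:

```python
import random
import secrets
import time

# BAD: seeding with the current time gives an attacker a tiny search window,
# since they can guess roughly when the key was generated.
gen_time = int(time.time())
bad_key = random.Random(gen_time).getrandbits(128)

# Attacker: try every second in a plausible window around the guessed time.
window = range(gen_time - 3600, gen_time + 3600)
cracked = any(random.Random(t).getrandbits(128) == bad_key for t in window)
print(cracked)  # True

# GOOD: use the OS CSPRNG instead.
good_key = secrets.randbits(128)
```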

(43:26):
And so we also have this layer of just bad information floating around out there. And that's partially what leads to these mistakes. And developers can overestimate their ability a little bit when it comes to security and cryptography. And cryptography is something that even I, as somebody who has spent a lot of time working on cryptography, take very seriously. I'm basically scared of it; whenever I work with cryptographic

(43:47):
algorithms, it's like dealing with nuclear codes, exactly. But how do you make sure that, how do you satisfy yourself that you've done the due diligence? I actually go and work with multiple different cryptographers. Exactly. That's the only way around it. We all go on about how security has to be built in, not bolted on, and sorry if I sound preachy on this thing, but it's just what I would

(44:13):
have done. 'Cause I was the world's lousiest programmer. Maybe you've gotta be stupider as a programmer, 'cause every time I did something, I went, I don't really know how to handle this. I'm gonna go to somebody who really knows this. Exactly. And we would do what we called chalk talk at the time, which became even weirder when we were actually in somebody's basement, like at Waterloo, where they actually have chalkboards.

(44:34):
They probably still have them to this day. But without a chalkboard, we'd be on somebody's whiteboard with markers and still call it chalk talk. But it was that whole thing of, go to somebody smarter than you, have them walk you through what you're doing. They may not even be a total expert, but they'll ask dumb questions. Absolutely. Which are exactly the ones that, that's it.

(44:55):
That always get you into trouble. Yeah. I'll go talk to my friends who have expertise in areas that I haven't delved into as deeply. Or I'll go on IRC and find those really arcane-knowledge people on something that I need to find out about. And so yeah, there are people you can talk to, but you just need to be a little bit humble. Exactly, because it's not the times when you don't know it.

(45:16):
Even, like I said, because I was never the smartest programmer in the world, I always had to ask people, make sure I was doing things right. But it's the guys who really think they know stuff that scare me. Yeah. 'Cause they don't ever have to ask. And I used to work with one architect, and he just wanted to get the program to be the smallest thing in the world. Even, like in the early days when I started, if you wrote a program that was

(45:41):
more than 5K, you had to do an overlay. Because we only had about three and a half K of memory available. I wish we still had such constraints. That's one of the things that really irks me. We have so much processing power now, and instead of making our software super fast and efficient, it's actually some of the least efficient software ever. Yeah. I think Word has 30 million lines of code.

(46:03):
Oh my God. For a word processor? I gave up on Microsoft a long time ago. Yeah, I finally gave up. Even if it's seven bucks a month, I gave up paying the tax to Microsoft to use, yeah, good, a word processor. Yeah. I'm just living in Vim now. Markdown files are my go-to, and it's been a while now that I've

(46:24):
left all that behind. This has been a most amazing chat. I want to close it off a little bit by talking about you. And I always do these things: I tell people they can't do a commercial on here, but you're working for a firm called Distrust. I always figure if people get to the end of the show, then you've done a good job and they're still listening. 'Cause the audience that I've got, particularly in cybersecurity,

(46:45):
if you actually try to sell them anything, they'll wish evil on me. They'll say, but let's talk about your company, Distrust. First of all, who came up with the name? Yeah. So it was my co-founder. And it's basically this idea of, you shouldn't trust, you should be able to verify, and so, therefore, Distrust. It's a company that my co-founder started a while back, probably five years back, and then I joined and started

(47:06):
working with him maybe three years ago. But it's basically a firm that specializes in working with high-risk clients. So a lot of it came from working with custodians and vaulting solutions, like I mentioned earlier. And we basically learned a lot of really great methods to protect sensitive systems. So we work with electrical grid operators, and we work with hedge funds.

(47:29):
And anyone who has really sensitive data or cryptographic keys they want to protect, we help them. And we think very holistically. And we think about this idea that I also mentioned earlier, of how do we eliminate single points of failure in a system. Because it's like a nuclear launch: you have two keys, but what if we say we need four keys? Or we play

(47:49):
with this idea of a quorum of people that are required to do something. So maybe you have seven people that have the right permissions, and you need four of those seven people for something to happen.
So we methodically think of ways to make sure, at every layer, including the Linux distribution or the operating system that's built, that there's no one maintainer, for example, who can go and insert some code somewhere into one of the dependencies

(48:12):
and undermine the integrity of everything that comes after it. And that's basically the game we play. We have a threat model on our website. It basically talks about these different levels of how we think about it. But our approach to threat modeling is basically, how do we defend from state-funded actors at the most extreme end.
Instead of trying to think about specific vulnerabilities like CVEs, we think about

(48:35):
how can we eliminate entire categories of attacks, and make it so that you don't rely on one person or computer for the integrity of your system. A lot of what we do is consulting, and we'll be on retainers with companies, like a fractional CISO or fractional security engineering team. All of the rest of our time goes to building these open source tools, and we have a bunch of them. We have some custom operating systems.

(48:56):
We work a lot with remote attestation technology: getting a server to cryptographically prove what software it's running. And a prerequisite for this is deterministic builds. You need the software to be deterministic, because essentially the idea is, let's say you use a VPN. You don't want to just trust the VPN provider that they don't log anything, like pinky promise.

(49:18):
What if the server could actually cryptographically attest to the software it's running? So imagine that portion of their stack is open source software, and it's basically the thing that you connect to. Now, if it's open source, you can read and review the code and say, okay, great, this actually doesn't log anything. Amazing. You build the software locally, because it's deterministic.

(49:38):
You expect the hash that you get locally to be the same as what the server tells you is running on that server. And so there is a special type of hardware called a TPM, a trusted platform module, which is basically a secure chip that has its own keys that only it has access to. And there are these things called PCRs. So between those two, you can basically get the measurement of the server, figure

(50:02):
out what the binary is that's running on the server, and then cryptographically sign it with the secure chip. And now you get a cryptographic attestation from the server proving that it's running the code that you reviewed locally. So this is an amazing building block and a level of trust that I think in the future we're going to see a lot more of, but it's just a fundamental building block that is underutilized right now.
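The PCR mechanism Anton mentions can be modeled in a few lines: a PCR is never set directly, only extended by hashing the old value together with a new measurement, so the final value commits to the entire chain of software that booted. This is a simplified sketch, not real TPM tooling; a real quote is signed by the TPM's attestation key, and the component names are hypothetical.

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = H(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot(components) -> bytes:
    pcr = bytes(32)  # PCRs start zeroed at reset
    for blob in components:
        pcr = pcr_extend(pcr, hashlib.sha256(blob).digest())
    return pcr

# Verifier: rebuild the deterministic binaries locally and predict the PCR.
expected = measure_boot([b"firmware", b"kernel", b"enclave-app-v1"])

# Server: reports its quoted PCR (in reality, signed by the TPM's key).
quoted = measure_boot([b"firmware", b"kernel", b"enclave-app-v1"])
print(quoted == expected)  # True: the server runs exactly the reviewed build
```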

(50:25):
And so that's another example of where we spend time building. We built an operating system called EnclaveOS specifically to make this sort of thing easier. So basically, wherever we find there are opportunities to make it easier to access these defense mechanisms, we create open source software and say, here we go, please use it.

(50:45):
Help us build it. And everything we do is basically open source. So yeah, remote attestation is a really interesting topic as well. And then in the Caymans, I actually came here and opened a company called Caution, because Benjamin Franklin once said that the parents of security are distrust and caution. We will actually be creating some products that are like managed products to help.

(51:07):
For example, one of our products will be a service where you can ask us to build your software and give you a hash of what we built. So when you're doing deterministic or reproducible builds as a part of your CI/CD, you could say, hey, what's the hash of this? And so you have another separate system reproducing your software that you don't have to set up for yourself. So that's just one example of what we'll be doing.

(51:28):
But I'm gonna tell you guys, if you want to get out of the security game, you could go to most of the AI companies and help them name their products so they didn't have stupid names. All of our stuff is named very simply. We have an operating system that's for offline operations; it's called AirgapOS. We have an operating system for enclaves; it's called EnclaveOS.

(51:49):
We have the Linux distribution that full source bootstraps everything; it's called StageX, because you use stages to bootstrap things. So this is a new stage. And so yeah, we try to use just very simple names. This has been fantastic, Anton. It was great to meet you. And for anybody who's listening to this, if they're still here with us: we met

(52:09):
because a friend recommended you, and I urge people to do this if they know somebody's doing interesting stuff. I love to, every once in a while, just do an interview that is just talking about interesting stuff that people are doing. So this has been great. Thank you. Before we go, though, I always say that when I was heading up things, everybody thinks that the CEO or the head of an organization is running it.

(52:31):
Now, we're just selling, trust me. And I always took away one thing, which was that when you left a room, what you really wanted to do when you were in the room was make people remember you when you left. In other words, the question was, what are they thinking about when you leave, when you're not there? So at the end of this, what do you want people to remember from this conversation

(52:51):
we had? What they'll have to remember, I mean, I don't wanna sound too generic, but is that it really matters what impact your work has. And for me, it is thinking about what my skills can best be used for, to

(53:12):
improve the security, privacy, and freedom of as many people as I can. And the way I found to do this is primarily through open source software that creates these building blocks that improve security. And so I would just encourage everyone to try to find their sweet spot for that, where their skills meet what the world needs, what they can also get paid for, and what they love.

(53:34):
This is, like, I ripped off the Japanese concept of Ikigai, but I'll also cite Mo Gawdat, who says, stop working for money, start working for purpose. That's it. And my passion of freeing these tools and open sourcing everything is definitely the trade I take any day over just making money. This has been fascinating.

(53:54):
My guest has been Anton Livaja, and he is with a firm called Distrust. Love to hear your comments, and if you know someone who's doing some fascinating work, or if you're one of those people, or if you've just got an idea for a show, drop me a note. Maybe we'll get a chance to chat, and some of the chats actually make it onto the show. You can reach me at editorial@technewsday.ca, or you can

(54:16):
find me on LinkedIn, or if you're watching on YouTube, you know what to do. Just leave a comment under the video. David Shipley will be sitting in on Monday, and I'll be back in the news chair on Wednesday. I'm your host, Jim Love. Thanks for listening.