
July 8, 2025 45 mins

Are you tired of skyrocketing AI infrastructure costs? What if a viral Reddit idea could change everything? In this episode, we dive into Memvid, a groundbreaking open-source project that's turning AI memory on its head by using video compression.

Imagine storing millions of text documents as a single, tiny MP4 file, then searching it in milliseconds – all without expensive GPUs or complex databases. We reveal the real-world experiment where 10,000 PDFs were compressed to just 1.4GB, slashing RAM usage from over 8GB to 200MB, and working completely offline!    

Could Memvid's CPU-friendly retrieval make costly NVIDIA GPU infrastructure obsolete for many AI tasks?  We explore how this innovation is democratizing AI, enabling powerful edge and offline applications, and revolutionizing AI software testing with portable "test brains."    

Tune in to discover if Memvid is the future of affordable, efficient AI. Don't miss this deep dive into the tech that's got everyone talking!

Support the show

Thanks for tuning into this episode of Dean Does QA!

  • Connect with Dean: Find Dean's latest written content and connect on LinkedIn: @deanbodart
  • Support the Podcast: If you found this episode valuable, please subscribe, rate, share, and review us on your favorite podcast platform. Your support helps us reach more listeners!
  • Subscribe to DDQA+: Elevate your AI knowledge with DDQA+, our premium subscription! Subscribe and get early access to new episodes and exclusive content to keep you ahead.
  • Got a Question? Send us your thoughts or topics you'd like us to cover at dean.bodart@conative.be

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
SPEAKER_02 (00:00):
Welcome, welcome, welcome to the Deep Dive.

(00:21):
We are genuinely thrilled you're joining us today because, let's be honest, the world of AI is not just moving fast, it's practically breaking the sound barrier.

SPEAKER_03 (00:29):
Oh, absolutely.
Every single day.

SPEAKER_02 (00:30):
Every day,

SPEAKER_03 (00:31):
yeah.

SPEAKER_02 (00:31):
It feels like there's a new breakthrough, a new technology, a new buzzword that promises to fundamentally change how we interact with information and machines.
And what's consistently at the very core of so much of that buzz?
It's the incessant demand for more power, relentless efficiency, and perhaps most crucially, dramatically lower

(00:53):
costs.

SPEAKER_03 (00:54):
Exactly.
That cost factor is huge.

SPEAKER_02 (00:56):
Huge.
So today we're embarking on a deep dive into a concept that is truly mind-bending, almost defies intuition, and, if it lives up to its increasingly solid hype, could save organizations not just thousands but millions of dollars.

SPEAKER_04 (01:08):
It really has, yeah.

SPEAKER_02 (01:10):
So the core question we're grappling with, the one that sparked this entire investigation

SPEAKER_04 (01:14):
is,

SPEAKER_02 (01:15):
What if AI could remember things in a radically
different way?
And what if that fundamental shift in memory management translated directly into a massive financial and operational win for everyone involved?

SPEAKER_03 (01:27):
You've hit on precisely why this topic has
generated so much buzz.
For all the truly incredible advancements we're witnessing in AI, you know, generative models creating stunning art, large language models drafting complex code.

SPEAKER_02 (01:40):
Right, the stuff that gets all the headlines.

SPEAKER_03 (01:42):
Exactly.
But the underlying infrastructure, the sheer computational power required to train and run these systems, the massive storage needs, it all adds up to a staggering cost burden and an almost unbelievable level of
operational complexity.

SPEAKER_02 (01:57):
Yeah, the backend stuff nobody talks about.

SPEAKER_03 (01:59):
Right.
We're talking about vast data centers consuming immense amounts of energy, requiring highly specialized, often bespoke hardware, and demanding constant expert maintenance.

SPEAKER_02 (02:08):
That sounds exhausting just thinking about
it.

SPEAKER_03 (02:10):
It is.
The current challenges in scaling and affording advanced AI are immense, almost a bottleneck for broader adoption.
So any genuine innovation that addresses these core issues, that offers a smarter, more efficient paradigm, is, well, by definition a game changer of the highest order.

SPEAKER_02 (02:28):
That sentiment resonates deeply with what we've been tracking, and it's why we're taking this specific deep dive.
We're unpacking a fascinating concept that initially started as a trending, slightly incredulous discussion on
Reddit.

SPEAKER_03 (02:41):
Yeah, I remember seeing that pop up.

SPEAKER_02 (02:42):
But it quickly escalated into a topic of serious academic and industrial conversation within the AI and software quality assurance communities.
Our source material today comes directly from a highly insightful series, the Dean Does QA LinkedIn series by Dean Bodart.

SPEAKER_03 (02:58):
Great series, by the way.

SPEAKER_02 (02:59):
Absolutely.
Specifically, his 18th episode, which provocatively asks: Game over for NVIDIA? Reddit says this AI memory hack saves millions.

SPEAKER_03 (03:07):
That title definitely grabs your attention.

SPEAKER_02 (03:10):
It does.
And this isn't merely an interesting thought experiment or hypothetical musing.
It's a tangible, actively developed open source project.
And it proposes using something as seemingly utterly unrelated as video compression to manage AI's knowledge base, its memory.

SPEAKER_04 (03:29):
Right.

SPEAKER_02 (03:30):
It sounds unusual, almost like science fiction,
doesn't it?
But the implications, once you start to peel back the layers, are truly profound.

SPEAKER_03 (03:37):
It absolutely sounds counterintuitive at first
glance.
When you hear AI memory and video compression in the same sentence, your brain does a little double take.

SPEAKER_02 (03:45):
Mine certainly did.

SPEAKER_03 (03:45):
But our mission today is to thoroughly explore this open source project, which is called Memvid.
It's creating significant waves precisely because it promises to dramatically cut down those very AI costs and complexities we just highlighted.
In fact, it's already sparking very serious conversations within the industry about whether it could fundamentally challenge the entrenched need for some of the most expensive

(04:08):
and specialized hardware in the AI world.
You know, the very hardware that currently forms the backbone of countless AI operations.

SPEAKER_02 (04:15):
The NVIDIAs of the world.

SPEAKER_03 (04:17):
Exactly.
For you, the listener, what this ultimately means is that we're looking at smarter, significantly more affordable, and incredibly portable ways to handle AI data.
This could fundamentally transform how advanced AI is built, deployed, and accessed.

SPEAKER_02 (04:36):
Okay, so let's truly unpack this, because when you first hear the phrase video compression for AI memory, your brain might just short-circuit a little, or at least mine did.

SPEAKER_03 (04:46):
Yeah, it's understandable.

SPEAKER_02 (04:47):
We're so accustomed to thinking about data storage in very specific, traditional ways: hierarchical file systems, relational databases, vast cloud-based object storage, or specialized, high-performance distributed systems.

SPEAKER_03 (05:00):
The usual suspects.

SPEAKER_02 (05:01):
Right, but here's Memvid coming along and asking.

SPEAKER_03 (05:03):
Yeah.

SPEAKER_02 (05:04):
What if you could use the very technology that delivers your favorite blockbuster movies or funny cat videos to manage AI's long-term memory?

SPEAKER_03 (05:11):
It's a bold move.

SPEAKER_02 (05:13):
It's truly a surprising and almost subversive concept, and that initial strangeness, that "wait, what?" moment, is precisely what makes it so captivating and, dare I say, brilliant.

SPEAKER_03 (05:21):
And that initial clarification is crucial.
It's important to state up front that Memvid isn't trying to replace all traditional memory systems or every form of data storage.

SPEAKER_02 (05:31):
Okay, that's a good point.
It's not a silver bullet.

SPEAKER_03 (05:33):
No, not at all.
What it offers is a unique, highly specialized, and incredibly efficient approach for specific types of AI memory, particularly for large, static or semi-static knowledge bases that AI models need to reference constantly.

SPEAKER_02 (05:49):
Gotcha.
Like reference libraries.

SPEAKER_03 (05:50):
Exactly.
When we typically discuss big, expensive, traditional AI memory systems in the context of large language models or complex retrieval augmented generation systems, we're often referring to what are known as vector databases.

SPEAKER_02 (06:04):
Right, the things that help AI understand meaning, not just keywords.

SPEAKER_03 (06:08):
Precisely.
These are highly specialized data stores designed to store high-dimensional vectors, essentially vast arrays of numbers that represent the abstract meaning of data, whether it's text, images, or audio.
AI models use these vectors to understand and retrieve information by finding similar concepts, not just exact keyword matches.

SPEAKER_02 (06:26):
But building and maintaining those sounds complex.

SPEAKER_03 (06:29):
It is.
It demands powerful, dedicated, often GPU-accelerated computing infrastructure, significant storage capacity, and constant, complex maintenance for indexing, updating, and scaling.
All of which, as we've noted, translates directly into extremely high operational costs.
Memvid steps in with a completely different, almost

(06:51):
elegant paradigm that bypasses much of that traditional overhead.

SPEAKER_02 (06:55):
So how does this clever trick work, then?
Without all the technical jargon that usually makes my eyes glaze over, promise.

SPEAKER_03 (07:01):
Okay, challenge accepted.
No jargon.
Let's

SPEAKER_02 (07:03):
break down how Memvid fundamentally turns raw information into something that's not only familiar to our existing digital infrastructure, but also entirely new and incredibly efficient for AI.
Imagine you have a truly immense volume of documents, let's say the entire Wikipedia database, or all the technical manuals for a complex piece of machinery, or every legal precedent in a

(07:24):
specific jurisdiction.

SPEAKER_03 (07:25):
A mountain of data.

SPEAKER_02 (07:26):
Exactly.
This is the mountain of knowledge you want your AI to have instant access to.
The first step in Memvid's process is fascinating.
It takes those raw documents, be they text files, PDFs, or web pages, and intelligently breaks them down into smaller, digestible pieces.
Think of it like taking a huge, comprehensive textbook and meticulously splitting it into individual paragraphs or even

(07:49):
distinct sentences, each representing a discrete chunk of information.
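
To make that first step concrete, here is a minimal Python sketch of the chunking idea. This is an illustrative stand-in rather than Memvid's actual splitter, and the chunk size and overlap values are arbitrary assumptions.

```python
# Illustrative sketch of step one: splitting a large document into
# small, overlapping chunks. The window and overlap sizes here are
# arbitrary; a real splitter would be more sophisticated.
def chunk_text(text, chunk_size=200, overlap=40):
    """Split text into overlapping character windows."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size].strip()
        if piece:
            chunks.append(piece)
    return chunks

# A tiny stand-in for "the entire Wikipedia database": repeated text.
document = "AI memory can be stored as compressed video frames. " * 20
chunks = chunk_text(document)
```

Each resulting chunk is small and self-contained, which is what the fingerprinting and packing steps that follow rely on.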

SPEAKER_03 (07:53):
Breaking it down.
Makes sense.

SPEAKER_02 (07:55):
Here's where it gets really interesting, and where Memvid deviates from typical compression.
Each one of these small pieces of text then gets what we can best describe as a unique digital DNA sequence, or a highly specific unique hash.

SPEAKER_03 (08:09):
A digital fingerprint.

SPEAKER_02 (08:10):
Kind of.
But this isn't just a random identifier.
It's designed so that the AI, when it encounters this fingerprint, instantly recognizes not only what that piece of information is but, more profoundly, its semantic meaning.

SPEAKER_03 (08:22):
Ah, the context.

SPEAKER_02 (08:24):
Yes, and how it relates contextually to every other piece of data within its entire knowledge base.
It's about establishing a relationship, a context.
Once that's done, this process...
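
A hedged sketch of the "digital DNA" idea described above: each chunk gets a stable content hash for identity, plus a toy meaning vector for semantic comparison. Real systems use learned sentence embeddings; the bag-of-words counter below is purely illustrative.

```python
import hashlib
from collections import Counter

def fingerprint(chunk):
    """Stable identifier: identical text always yields the same hash."""
    return hashlib.sha256(chunk.encode("utf-8")).hexdigest()

def toy_embedding(chunk):
    """Crude stand-in for a semantic vector: lowercase word counts."""
    return Counter(chunk.lower().split())

def similarity(a, b):
    """Overlap score between two toy embeddings (higher = more related)."""
    return sum(min(a[w], b[w]) for w in set(a) & set(b))

chunk = "video compression stores AI memory"
query = "how does AI memory use video compression"
```

The hash pins down what a chunk is; comparing vectors is what lets related chunks be found by meaning rather than exact keywords.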

SPEAKER_03 (09:05):
This special code is absolutely crucial because it prepares the data for its next rather unconventional but remarkably powerful step.

SPEAKER_04 (09:13):
Right.

SPEAKER_03 (09:13):
This is where Memvid truly leverages the immense power of an existing, decades-old, and globally optimized technology that most of us interact with daily without a second thought.
It's a brilliant example of cross-disciplinary innovation.

SPEAKER_02 (09:26):
Which brings us to step two.
Taking these highly efficient special codes, these digital DNA sequences of your knowledge, and cleverly packing them into individual frames of a standard video file.

SPEAKER_03 (09:36):
Yep, a video file.

SPEAKER_02 (09:37):
Yes, you heard that right.
A regular, run-of-the-mill video file, typically an MP4.
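
Conceptually, the packing step looks something like the sketch below. Memvid actually renders chunk data as QR-code images inside real MP4 frames; here a "frame" is just a fixed-size byte buffer, which is enough to show how chunks map onto numbered frames. The frame size is an arbitrary assumption.

```python
FRAME_SIZE = 64  # bytes per pretend "video frame" (arbitrary for the sketch)

def pack_into_frames(payloads):
    """Pack each chunk's bytes into numbered frames; return frames + index."""
    frames, index = [], {}
    for chunk_id, data in payloads.items():
        start = len(frames)
        for i in range(0, len(data), FRAME_SIZE):
            # Pad every frame to a fixed size, like a real video frame.
            frames.append(data[i:i + FRAME_SIZE].ljust(FRAME_SIZE, b"\0"))
        index[chunk_id] = (start, len(frames) - start)  # (first frame, count)
    return frames, index

payloads = {"chunk-0": b"compressed knowledge A", "chunk-1": b"B" * 100}
frames, index = pack_into_frames(payloads)
```

The index built alongside the frames is what later makes direct seeks possible, instead of playing the file through.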

SPEAKER_03 (09:41):
It still sounds wild when you say it like that.

SPEAKER_02 (09:43):
It does.
The sheer genius here is that Memvid isn't attempting to invent a new proprietary compression algorithm from scratch.
Instead, it's brilliantly utilizing the same incredibly sophisticated and highly optimized compression technology that makes your favorite movies stream seamlessly on Netflix.

SPEAKER_03 (10:01):
Or YouTube videos load instantly.

SPEAKER_02 (10:03):
Exactly.
Or allows you to upload...
Yeah.
That would be entertaining.

UNKNOWN (10:17):
Yeah.

SPEAKER_02 (10:35):
...data in video.
That's a key distinction.

SPEAKER_03 (10:37):
It is.
And the implications of this for data efficiency are truly profound.
Video compression algorithms like H.264 or HEVC are the result of decades of intense research and development.

SPEAKER_02 (10:48):
Billions invested.

SPEAKER_03 (10:49):
Absolutely.
They are incredibly sophisticated, optimized not just for minimizing file size, but also for rapid encoding, efficient transmission, and swift hardware-accelerated decompression.
By piggybacking on this existing, incredibly mature, and widely supported technology, Memvid inherently gains all those benefits: immense space savings, ease of distribution, and native support

(11:13):
across virtually every computing device.

SPEAKER_02 (11:15):
So it just works everywhere.

SPEAKER_03 (11:16):
Pretty much.
It's not just about shrinking the raw file size.
It's about structuring the data in a way that allows for incredibly rapid, targeted access, almost like having a perfectly indexed, pre-cached knowledge library.
This architectural decision to repurpose and leverage ubiquitous video codecs for data storage fundamentally shifts the

(11:37):
hardware and software requirements for managing AI knowledge.

SPEAKER_02 (11:40):
Right.

SPEAKER_03 (11:40):
It's about working smarter, not necessarily harder,
with technologies we already have at our disposal.

SPEAKER_02 (11:45):
Okay, so you've now got this single video file
that's essentially a dense, compressed library full of your AI's special knowledge codes.
But here's the critical question.
How does the AI actually use it?
Good

SPEAKER_03 (11:54):
question.

SPEAKER_02 (11:55):
How does it retrieve specific information without having to play the entire video file from start to finish, which would be wildly inefficient?

SPEAKER_03 (12:03):
Right.
That would defeat the whole purpose.

SPEAKER_02 (12:05):
That's step three, instant search and retrieval.
When the AI needs to find something, Memvid doesn't initiate a sequential video playback.

SPEAKER_03 (12:14):
Definitely not.

SPEAKER_02 (12:15):
Instead, it employs a smart internal index, almost like a super-fast, perfectly organized table of contents that lives alongside the video file.

SPEAKER_03 (12:24):
Think of it like the chapter markers on a DVD, but
way more sophisticated.

SPEAKER_02 (12:28):
Exactly.
This index allows Memvid to quickly and precisely jump directly to the right video frames, or even specific data blocks within those frames, that contain the information the AI is looking for.
It then rapidly reads those special codes directly from the frames and instantly retrieves the relevant information you need.
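
The retrieval path can be sketched as follows. A plain dictionary stands in for Memvid's sidecar index; in a real MP4 the slice below would be a direct frame seek in a video library, but the principle is the same: jump straight to the right frames, never scan the whole file.

```python
def retrieve(chunk_id, frames, index):
    """Jump directly to a chunk's frames and reassemble its payload."""
    start, count = index[chunk_id]              # O(1) lookup, no playback
    data = b"".join(frames[start:start + count])
    return data.rstrip(b"\0")                   # drop any frame padding

# Toy "video": two frames plus a table-of-contents index.
frames = [b"hello ", b"world"]
index = {"greeting": (0, 2)}
result = retrieve("greeting", frames, index)
```

Because the lookup is constant-time and the read touches only the frames it needs, retrieval cost stays flat no matter how large the file grows.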

SPEAKER_03 (12:48):
And instantly is the key word

SPEAKER_02 (12:50):
here.
And when we say instantly, we're talking about retrieval times designed to happen in less than a second.

SPEAKER_03 (12:54):
Which is incredibly fast for large data sets.

SPEAKER_02 (12:57):
Right.
What's truly fascinating here, and critically important for cost savings, is that this entire process, from breaking down the text to retrieving information, doesn't need expensive, specialized graphics cards or complex, high-maintenance database servers.

SPEAKER_03 (13:12):
No big iron required.

SPEAKER_02 (13:13):
Exactly.
It runs on surprisingly modest hardware.

SPEAKER_03 (13:16):
And that, I believe, is the true aha moment for many
listeners.

SPEAKER_02 (13:20):
Yeah.

SPEAKER_03 (13:21):
Because it beautifully demonstrates how a seemingly utterly unrelated technology, the very one that powers our binge-watching entertainment, can be so cleverly...
Right.
It really makes you think differently.

SPEAKER_02 (13:54):
It

SPEAKER_03 (13:55):
embodies the principle of finding elegant solutions in unexpected places, leading to drastically reduced infrastructure needs.
It really pushes us to rethink how we conceive of and interact with data.

SPEAKER_02 (14:09):
Now, at this point, you might be thinking, this sounds almost too good to be true.
Where's the hard proof?

SPEAKER_03 (14:14):
Always the question, show me the data.

SPEAKER_02 (14:16):
Exactly.
Where are the benchmarks to validate such a bold claim?
And that brings us to what was really an accidental but undeniably compelling breakthrough that truly ignited the whole thing.

SPEAKER_03 (14:29):
Ah, the origin story.

SPEAKER_02 (14:30):
Yeah.
The initial buzz around Memvid wasn't just theoretical.
It exploded onto the scene because of a compelling real-world experiment shared by a developer who published his findings in the Dean Does QA series.

SPEAKER_03 (14:42):
Source for today's dive.

SPEAKER_02 (14:43):
Right.
This single demo...

SPEAKER_03 (15:01):
That's absolutely right.
The developer took a highly tangible, real-world data set.
Specifically, 10,000 diverse PDF documents.

SPEAKER_02 (15:09):
10,000.
That's a lot of docs.

SPEAKER_03 (15:11):
It really is.
Just imagine the sheer volume of information, the varied content, and the typical storage footprint required for 10,000 individual documents.
These weren't tiny files.
They represented a significant corpus of knowledge.
And he managed to compress all of that into a single video file that was only 1.4 gigabytes in size.

SPEAKER_02 (15:31):
Wow.
1.4 gigs for 10,000 PDFs.
That's tiny.

SPEAKER_03 (15:35):
It's incredibly dense.
To provide some context for you, our listeners, 1.4 gigabytes is roughly the size of a relatively short high-definition movie, maybe an hour or so in length.
But this single compact file contained the full searchable knowledge of 10,000 distinct documents.
Amazing.
That alone is a testament to the incredible efficiency of the underlying video compression technology that Memvid so

(15:58):
brilliantly leverages.
It's an almost unbelievable density of information.

SPEAKER_02 (16:02):
And the performance?
This is where it gets really interesting, and where the game over for NVIDIA question starts to feel less like hyperbole and more like a genuine challenge.
Searching through all that information, all those 10,000 PDFs meticulously packed into a 1.4-gigabyte file, was almost indistinguishable in speed from

(16:23):
using a massively costly, enterprise-grade commercial system.

SPEAKER_03 (16:27):
Which is astonishing.

SPEAKER_02 (16:28):
The article provided the exact comparison.
Memvid achieved retrieval speeds of approximately 900 milliseconds.

SPEAKER_03 (16:34):
Just under a second.

SPEAKER_02 (16:36):
Compared to 820 milliseconds for a leading commercial solution.

SPEAKER_03 (16:40):
So, super close.

SPEAKER_02 (16:41):
We're talking about a difference of merely 80 milliseconds.
To put that into perspective, human reaction time is typically around 200 milliseconds.

SPEAKER_03 (16:48):
So you wouldn't even notice it.

SPEAKER_02 (16:50):
Exactly.
This 80-millisecond difference is practically imperceptible to a human user, yet it was achieved with a radically different and far less resource-intensive approach.

SPEAKER_03 (16:59):
But here's the real kicker, the absolute game changer in this compelling demonstration, especially for organizations grappling with escalating infrastructure costs.

SPEAKER_02 (17:08):
What's the kicker?

SPEAKER_03 (17:09):
The memory footprint.
The Memvid solution required an astonishingly low amount of computer memory: just 200 megabytes of RAM.

SPEAKER_02 (17:16):
200 megs?
That's nothing.

SPEAKER_03 (17:19):
It's tiny.
Contrast that with the traditional, costly commercial system, which needed over 8 gigabytes of RAM to achieve similar retrieval speeds.

SPEAKER_02 (17:27):
8 gigs versus 200 megs.

SPEAKER_03 (17:29):
Yep.
That's a monumental difference in resource consumption, nearly 40 times less memory required.
Imagine the immediate savings in hardware costs, energy, and cooling.
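
The "nearly 40 times" figure follows directly from the numbers quoted in the demo:

```python
# Sanity-check the quoted comparison: over 8 GB of RAM for the
# commercial system versus 200 MB for Memvid.
commercial_ram_mb = 8 * 1024   # 8 GB expressed in megabytes
memvid_ram_mb = 200
ratio = commercial_ram_mb / memvid_ram_mb  # roughly a 40-fold difference
```

Since the comparison says "over 8 gigabytes," the real ratio is at least this large.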

SPEAKER_02 (17:39):
Yeah, that's huge savings right there.

SPEAKER_03 (17:41):
And perhaps even more critically for real-world, decentralized applications...

(18:13):
...in secure or air-gapped environments where internet access is restricted or simply unreliable.
Think of a factory floor, an autonomous vehicle, or even battlefield operations.

SPEAKER_02 (18:23):
That opens up so many possibilities.

SPEAKER_03 (18:25):
It vastly reduces operational costs, enhances security, and significantly improves reliability and responsiveness, especially for critical use cases like edge AI and embedded systems.

SPEAKER_02 (18:37):
So it really started as this surprising, almost whimsical experiment.
And as you mentioned, there was a fair bit of initial skepticism.

SPEAKER_03 (18:44):
Oh, yeah, definitely.
The meme comments were flying.

SPEAKER_02 (18:47):
Right.
While some, even many, initially thought it was just a meme or a clever coding joke, you know, one of those quirky tech ideas that floats around the internet for a day and then vanishes, that rigorously conducted real-life demonstration truly shifted the perception.
It forced the industry to take notice.

SPEAKER_03 (19:06):
It absolutely did.
The creator of Memvid was very clear in his series: this project isn't about replacing every single traditional file storage method or completely overhauling how we store every piece of data in the world.

SPEAKER_02 (19:18):
Right.
Managing expectations.

SPEAKER_03 (19:19):
Exactly.
That would be an unrealistic goal.
Instead, the fundamental intent is about creating a truly new, paradigm-shifting way for AI to access knowledge that's incredibly portable, works reliably offline without external dependencies, and can even be broadcast or distributed like a standard video file.

SPEAKER_02 (19:38):
Broadcast knowledge.
That's a cool concept.

SPEAKER_03 (19:40):
Imagine the implications.
You could potentially send an entire comprehensive AI knowledge base to a device via a simple file transfer, or even stream it, and it would just work instantly.

SPEAKER_02 (19:52):
Like updating its brain over the air?

SPEAKER_03 (19:54):
Kind of.
This real-life demonstration provided the tangible proof and practical utility of this unconventional approach, rapidly moving it from a curious concept to a legitimate disruptive technology.
It's a powerful reminder of other emerging technologies throughout history that initially faced immense skepticism.
Think about the early days of cloud computing, or even before

(20:14):
that, the very internet itself.

SPEAKER_02 (20:16):
Yeah, people laughed at the idea of online shopping
once.

SPEAKER_03 (20:18):
Exactly.
People needed to see the practical application, the demonstrable value, to truly grasp the monumental potential.
Memvid is clearly on that same trajectory.

SPEAKER_02 (20:28):
Okay, so that compelling demonstration and the underlying architectural shift leads us directly to the elephant in the room, the provocative question emblazoned right there in the article's title: Game over for NVIDIA?

SPEAKER_03 (20:40):
The big question.

SPEAKER_02 (20:41):
It's a bold statement, designed to grab attention, but it forces us to deeply consider the current AI hardware landscape.
For years, powerful graphics processing units, or GPUs, especially those manufactured by NVIDIA.

SPEAKER_03 (20:55):
The dominant player.

SPEAKER_02 (20:57):
Right.
They have been, without exaggeration, the absolute workhorses of modern AI.
They have been indispensable, almost synonymous with AI processing power.

SPEAKER_03 (21:07):
And for very good reason.
GPUs are engineered with literally thousands of processing cores, making them exceptionally adept at parallel processing, the ability to perform many, many calculations simultaneously.

SPEAKER_02 (21:18):
Like a swarm of bees all working together.

SPEAKER_03 (21:20):
That's a good analogy.
This specific architecture is absolutely essential for training the incredibly complex, data-hungry AI models we see today, which involve crunching vast multidimensional data sets through neural networks.
This is an inherently parallelizable task that GPUs excel at.
They are also crucial for running large-scale AI applications, particularly those that rely on massive, constantly

(21:44):
querying vector databases for quick data retrieval, which again is another parallel processing challenge.

SPEAKER_02 (21:50):
But they're expensive.

SPEAKER_03 (21:51):
Very.
These powerful, specialized GPUs come with a hefty price tag, often running into thousands, even tens of thousands of dollars per unit.
Ouch.
And they demand considerable power consumption and sophisticated cooling systems.
Both of these factors contribute heavily to the astronomically high cost of building, maintaining, and scaling modern
AI infrastructure.

(22:12):
NVIDIA has undeniably built an empire on this necessity.

SPEAKER_02 (22:16):
So given that context, where exactly does Memvid fit into this picture?
How does it fundamentally challenge that entrenched GPU dominance for certain AI tasks?
It seems almost counterintuitive given the reliance on video, which we associate with graphics, right?

SPEAKER_03 (22:30):
It's a fascinating, almost paradoxical architectural shift.
While Memvid's newer, highly optimized version, which is being developed in the programming language Rust.

SPEAKER_02 (22:39):
Ah, Rust.
Fast and safe.

SPEAKER_03 (22:41):
Exactly.
While that version can indeed leverage GPUs for the initial encoding process, that is, converting your raw data into the compressed video memory format to make that initial data ingestion incredibly fast, the real game changer, and the way it truly impacts the GPU question, lies in its efficiency during retrieval.

SPEAKER_02 (23:02):
The retrieval part.

SPEAKER_03 (23:03):
This is where the core innovation and the cost-saving potential truly manifest.

SPEAKER_02 (23:07):
So if I understand correctly, when the AI actually needs to pull specific information out of that dense video memory, that's where the magic happens and where the GPU demand largely vanishes.

SPEAKER_03 (23:18):
Precisely.
Memvid is ingeniously designed to be incredibly CPU-friendly for searching and accessing information.

SPEAKER_02 (23:24):
CPU-friendly.

SPEAKER_03 (23:25):
This means that once your AI's comprehensive knowledge base is packed into a compact Memvid file, you don't necessarily need an expensive, high-end GPU just to retrieve that information quickly and efficiently.

SPEAKER_02 (23:35):
You can use the chip you probably already have.

SPEAKER_03 (23:37):
Largely, yes.
The bulk of the processing for data access shifts from the specialized, costly, and power-hungry GPU to the more general-purpose, ubiquitous, and significantly more affordable CPU.
This fundamental shift drastically alters the cost equation.
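
Why retrieval stays CPU-friendly can be illustrated with a toy search: once the knowledge is indexed, answering a query is an index lookup plus a handful of cheap comparisons, plain sequential work with nothing for a GPU to accelerate. The word-overlap scoring below is a deliberately crude stand-in for real embedding math.

```python
def search(query, catalog, top_k=2):
    """Rank stored chunks by naive word overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(
        catalog.items(),
        key=lambda kv: len(q & set(kv[1].lower().split())),
        reverse=True,
    )
    return [chunk_id for chunk_id, _ in scored[:top_k]]

# A miniature knowledge base keyed by chunk id.
catalog = {
    "c1": "GPU prices and AI hardware costs",
    "c2": "video compression for AI memory",
    "c3": "cooking pasta at home",
}
results = search("AI memory video compression", catalog)
```

Even scaled up to millions of chunks, this shape of work, set operations and a sort, is exactly what commodity CPUs are good at.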

SPEAKER_02 (23:54):
And the low memory helps too, right?

SPEAKER_03 (23:55):
Absolutely.
Furthermore, there's its incredibly low and constant memory usage, a mere 500 megabytes of RAM, regardless of the actual data size contained within the Memvid file.

SPEAKER_02 (24:05):
Constant.
It doesn't grow with the data.

SPEAKER_03 (24:07):
Nope.
It further diminishes the need for specialized, high-capacity, or expensive hardware.
Unlike traditional database systems that scale RAM usage with the data volume, Memvid offers a predictable, minimal memory footprint.
This fundamentally changes the hardware requirements for AI deployment, opening up possibilities that were previously economically unfeasible.

SPEAKER_02 (24:28):
And this isn't just an isolated, clever trick.
It seems to align perfectly with a much broader, accelerating trend in the AI hardware landscape, doesn't it?
It feels like Memvid is riding a bigger wave.

SPEAKER_03 (24:38):
It absolutely does.
And this is where Memvid's strategic importance truly shines.
Its approach perfectly aligns with the growing trend towards local AI computing and the rise of unified memory architectures.

SPEAKER_02 (24:50):
Like Apple's M chips.

SPEAKER_03 (24:51):
Exactly.
Think about revolutionary chips like Apple's M-series processors, or even AMD's newer APUs, which are increasingly integrating the CPU, GPU, and system memory onto a single, highly optimized chip package.
These designs are fundamentally challenging the traditional dominance of discrete, separate GPUs for running many AI tasks

(25:14):
directly on everyday devices.

SPEAKER_02 (25:16):
Right, you don't always need that giant graphics
card anymore.

SPEAKER_03 (25:18):
For certain tasks, no.
Instead of needing a dedicated, separate, powerful GPU with its own memory, these unified memory chips are optimized for efficient AI processing directly on the device, leveraging a shared pool of memory.
Memvid plays directly into this by making AI memory retrieval remarkably efficient on standard, less power-hungry CPUs and integrated memory systems.

SPEAKER_02 (25:37):
So it fits the hardware trend.

SPEAKER_03 (25:39):
Perfectly.
What this means for the broader picture is nothing short of transformative.
By significantly lowering the hardware requirements for deploying a vast array of AI applications, Memvid could lead to a dramatic reduction in reliance on those costly, specialized GPU infrastructures for specific AI workloads.

SPEAKER_02 (25:58):
Like retrieval.

SPEAKER_03 (25:59):
Exactly like retrieval.
This translates directly into saving organizations potentially millions, not only in initial hardware investments but also in substantial ongoing operational costs, including electricity for power and cooling.

SPEAKER_02 (26:13):
That's

SPEAKER_03 (26:13):
huge.
For you, the listener, this could mean more powerful, more private, and more responsive AI experiences directly on the devices you already own, your laptop, your smartphone, your smart home devices, or even your car's navigation system, without needing a constant high-bandwidth connection to a massive cloud GPU farm.

SPEAKER_02 (26:29):
AI on my phone?
That actually works well offline.

SPEAKER_03 (26:32):
That's the promise.
It accelerates the move towards more private, more responsive, and more robust AI applications running right at the edge of the network, closer to the data source and the user.
It enables a future where AI isn't confined to massive data centers, but is distributed, pervasive, and truly accessible.

SPEAKER_02 (26:51):
This is truly fascinating, because it feels like Memvid isn't just a clever technical tweak.
It's a foundational tool that could fundamentally reshape the future of AI in several crucial ways.
First, and perhaps most impactful for many organizations, it directly addresses one of the biggest, most persistent hurdles: cost.
It's about making advanced AI significantly more accessible to

(27:13):
a much wider audience.

SPEAKER_02 (27:21):
That's arguably

SPEAKER_03 (27:26):
the most critical implication.
By dramatically cutting down the cost and effort of running advanced AI applications, specifically by removing the need for huge, expensive computer setups and complex, recurring cloud service subscriptions, Memvid effectively democratizes AI.

SPEAKER_02 (27:42):
Bringing AI to the masses, so to speak.

SPEAKER_03 (27:44):
In a way, yes.
What was once the exclusive domain of the largest tech giants, like Google, Amazon, or Microsoft, with their multi-billion-dollar infrastructure budgets, can now be realistically within reach for smaller companies, agile startups, academic researchers, and even individual developers operating on tight budgets.
It's empowering.

(28:05):
It is.
Imagine a small startup in, say,rural Arkansas developing a
groundbreaking AI-poweredcustomer service tool or a
specialized data analysisplatform for a niche industry.
With traditional methods, they'd be forced to invest heavily in cloud resources or build out their own costly infrastructure. Wow. And it enables another incredibly

SPEAKER_02 (28:46):
exciting prospect that feels like it's straight
out of a futuristic novel.
AI that works literallyanywhere.

SPEAKER_03 (28:51):
Yes.
Think about the truly immensepractical implications here.
Imagine AI that runs directly onyour smartphone, a smart home
device in your living room, oreven the onboard computer of a
self-driving car or anindustrial robot on a remote
factory floor.

SPEAKER_02 (29:07):
All without internet.

SPEAKER_03 (29:08):
All without needing a constant reliable internet
connection.
Memvid's offline-first design,combined with its ability to be
easily copied and moved like anystandard digital file, makes
this a tangible reality.

SPEAKER_04 (29:21):
Okay.

SPEAKER_03 (29:21):
This capability is absolutely perfect for what we
call edge AI, where devices needto be intelligent, autonomous,
and responsive, even whenthey're disconnected from the
central cloud.

SPEAKER_02 (29:32):
Edge AI is getting so much attention now.

SPEAKER_03 (29:34):
It is, and this feeds right into it.
This has profound implicationsfor privacy, as sensitive data
can remain on the device withoutever needing to be transmitted
to the cloud.
It enhances responsiveness, asthere's no network latency.
And critically, it vastlyimproves reliability, especially
in mission-critical applicationswhere internet connectivity
might be intermittent,non-existent, or subject to

(29:56):
security vulnerabilities.

SPEAKER_02 (29:57):
Like medical devices.

SPEAKER_03 (29:58):
Absolutely.
Consider AI-powered diagnosticsfor medical devices in remote
clinics, or smart agriculturalsensors making real-time
decisions in fields far fromcellular towers, the
possibilities are trulyboundless.

SPEAKER_02 (30:12):
And it's not just about portability and
accessibility.
It's also about a fundamentallysmarter, more efficient approach
to data storage itself.

SPEAKER_03 (30:20):
When you talk about smarter data storage, how does
Memvid specifically change thegame here compared to what we're
traditionally used to?

SPEAKER_02 (30:28):
Traditional data storage for large-scale AI often
involves vast, complex, anddistributed databases, where
information is fragmented andstored across multiple servers,
potentially in differentgeographic locations.

SPEAKER_00 (30:40):
Sounds messy.

SPEAKER_02 (30:40):
It can be.
While effective for massivescale, this approach can be
incredibly complex,resource-intensive, and costly
to manage, maintain, and secure.
Memvid offers a profoundlyunique way to make AI's
knowledge bases not justdramatically smaller in
footprint, but also incrediblyeasier to manage and deploy.

SPEAKER_03 (30:58):
Instead of merely shrinking individual pieces of
data, which many compressionalgorithms already do, Memvid
takes the entire collection ofinformation that constitutes an
AI's knowledge and compresses itinto one single, highly compact
video file.

SPEAKER_02 (31:13):
The whole thing in one file.

SPEAKER_03 (31:14):
Exactly.
Think of it as creating acomplete, self-contained
knowledge capsule.

SPEAKER_02 (31:19):
I like that. A knowledge capsule.

SPEAKER_03 (31:21):
Your AI can literally carry its entire understanding
of a subject, its entirecognitive context with it,
allowing it to operateindependently and intelligently
rather than needing toconstantly query external
distributed and often expensivesystems.
This capsule approachfundamentally simplifies
deployment, streamlines updates,and significantly reduces the

(31:42):
complexity of overall datamanagement for AI, giving it a
truly portable brain.
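
To make the knowledge capsule idea concrete, here is a minimal, illustrative Python sketch. It is not Memvid's real format or API; the function names and the simple keyword index are invented stand-ins, showing only how an entire knowledge base plus its search index can live inside one compressed, self-contained file.

```python
import json
import zlib

def build_capsule(chunks, path):
    # Pack every text chunk plus a tiny keyword index into a single
    # compressed file -- the "knowledge capsule".
    index = {}
    for i, text in enumerate(chunks):
        for word in set(text.lower().split()):
            index.setdefault(word, []).append(i)
    blob = json.dumps({"chunks": chunks, "index": index}).encode()
    with open(path, "wb") as f:
        f.write(zlib.compress(blob))

def search_capsule(path, query):
    # Open the one file, decompress it, and look up matching chunks.
    with open(path, "rb") as f:
        data = json.loads(zlib.decompress(f.read()))
    hits = set()
    for word in query.lower().split():
        hits.update(data["index"].get(word, []))
    return [data["chunks"][i] for i in sorted(hits)]
```

A real system would use semantic embeddings rather than keyword matching, but the single-file property is the point: copy the file, and the whole "brain" moves with it.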

SPEAKER_02 (31:47):
That mental image of a knowledge capsule makes
perfect sense.
Let's pivot to a very practical,real-world application where
this technology could have animmediate and massive impact.
The rapidly evolving world ofsoftware testing.

SPEAKER_03 (32:00):
Ah, yes.
QA.

SPEAKER_02 (32:01):
It's a field that's increasingly leveraging AI to
automate, accelerate, andsignificantly improve how we
ensure software quality.

SPEAKER_03 (32:10):
It truly is a perfect fit, almost as if Memvid
was designed for it.
Modern AI testing tools,especially those that aim for
intelligent automation,predictive analysis, or advanced
bug detection, they requireimmediate comprehensive access
to an immense amount of contextand historical data.

SPEAKER_02 (32:27):
What kind of data are we talking about?

SPEAKER_03 (32:28):
We're talking about everything from detailed user
requirements and functionaldesign plans to logs from past
test runs, comprehensive bugreports, performance metrics,
code change histories, and eveninternal development FAQs.

SPEAKER_02 (32:42):
A huge mix of

SPEAKER_03 (32:43):
stuff.
A huge mix.
Memvid provides a revolutionaryway to handle this colossal
volume of disparate data.
It can store all this text-baseddata and, as we'll discuss,
potentially much more in thefuture in highly compressed,
instantly searchable videomemory files.
This transforms what wouldotherwise be static documents
scattered across various systemsinto an active, intelligent, and

(33:04):
unified knowledge base that AItesting agents can query in real
time.

SPEAKER_02 (33:08):
Giving the AI testers instant context.

SPEAKER_03 (33:11):
Exactly.
Rich, on-demand context.

SPEAKER_02 (33:14):
And the offline first aspect sounds like it
would be incredibly useful,almost indispensable for many
testing scenarios, wouldn't it?

SPEAKER_03 (33:21):
Crucially so.
A significant portion ofsoftware testing, particularly
for embedded systems, hardware integration, or highly secure applications, happens in isolated lab environments or on devices that simply don't have internet access.

SPEAKER_02 (33:34):
Or can't have it for security.

SPEAKER_03 (33:36):
Precisely.
Or where connectivity isdeliberately restricted for
security reasons.
Memvid's offline first designmeans that these AI testing
platforms can function withcomplete reliability and full
access to their knowledge insuch situations.
They always have immediate access to the comprehensive data they need to make intelligent decisions without depending on external connectivity.

SPEAKER_02 (34:02):
So testing doesn't stop if the Wi-Fi drops.

SPEAKER_03 (34:05):
Pretty much.
This guarantees continuity androbustness in critical testing
workflows.
Imagine testing an automotive AIsystem in a test track
environment with no network or aclassified government system in
an air-gapped lab.
Memvid ensures the AI still hasits full brain available.

SPEAKER_02 (34:22):
And the speed.
How does that nearly instantaneous retrieval directly benefit AI in the demanding world of testing?

SPEAKER_03 (34:30):
The speed is absolutely paramount because
Memvid can retrieve information in less than a second. Remember that impressive 900-millisecond performance against commercial systems.

SPEAKER_02 (34:40):
Right, practically instant.

SPEAKER_03 (34:41):
AI testing agents can get instant context for
their tasks.
This immediacy acceleratesvarious key testing activities.
Whether it's an AI needing toinstantly understand a nuanced
user story to generate relevanttest cases, quickly recall a
past bug pattern to predictregressions, rapidly reference a
design specification forvalidation.

SPEAKER_02 (34:59):
Or check performance data.

SPEAKER_03 (35:01):
Exactly.
Or analyze historicalperformance data to optimize
test prioritization.
This rapid retrieval means theAI isn't waiting for data.
This allows for more fluid,efficient, and intelligent test
execution and analysis,ultimately leading to faster
feedback cycles for developersand higher quality software.

SPEAKER_02 (35:18):
I could really imagine how this would be a game
changer for large distributedteams, having these complete
portable test brains that are so easy to move around and share.

SPEAKER_03 (35:28):
Exactly.
Picture this scenario.
Entire comprehensive collectionsof testing knowledge, all the
detailed requirements documents,the intricate design
specifications, years ofhistorical bug data, even
internal FAQs and best practicesfor the product.
All of it can be meticulously packaged as single, compact, portable Memvid video files.

SPEAKER_02 (35:50):
The test brains.

SPEAKER_03 (35:51):
We can call them truly self-contained test
brains.
These portable test brains canthen be effortlessly shared
among different development, QA,and even operations teams,
regardless of their geographicallocation.

SPEAKER_02 (36:00):
So everyone's on the same page.

SPEAKER_03 (36:02):
Instantly.
They can be instantly deployedto various testing environments,
whether they are virtualmachines, physical labs,
cloud-based staging environments, or remote testing
rigs.
This ensures that every singleteam member and every AI agent
is working with the exact same,consistent, and most
importantly, up-to-dateinformation.

SPEAKER_01 (36:21):
That solves a lot of it.

SPEAKER_03 (36:30):
That solves a lot of headaches. A new team could just copy over a single Memvid file.

SPEAKER_02 (36:56):
And boom, ready to go.

SPEAKER_03 (36:57):
And their AI testing agents would instantly have all
the necessary context to begintheir work without any complex
setup or data synchronization.

SPEAKER_02 (37:04):
And finally, a really crucial point for any
kind of robust data-driven work,especially in quality assurance,
tracking changes and versioncontrol.

SPEAKER_03 (37:12):
Yes.

SPEAKER_02 (37:13):
This often feels like a missing piece in AI
knowledge bases.

SPEAKER_03 (37:16):
You've hit on a critical capability.
Just like software developersrely on sophisticated version
control systems, like Git, totrack every minute change made
to their code.

SPEAKER_02 (37:26):
Which is essential.

SPEAKER_03 (37:27):
Absolutely.
Memvid introduces the ability toversion control your AI test
datasets.
Since the entire knowledge baseis encapsulated within a single
self-contained file, you canmanage different versions of
that file with the same ease asmanaging code versions.

SPEAKER_02 (37:43):
So you can track history.

SPEAKER_03 (37:45):
Meticulously.
This means you can meticulouslytrack every change made to the
underlying test data, whetherit's an updated requirement
document, a new designspecification, or a newly
discovered bug report.
This granular versioning isindispensable because it ensures
the tests are repeatable.
You can always precisely revertto a specific version of your
knowledge base to rerun testsunder identical conditions.

SPEAKER_02 (38:07):
That's huge for debugging.

SPEAKER_03 (38:08):
Huge.
More profoundly, it helps youdeeply understand how updates to
your data affect the AI'sperformance and, by extension,
the overall quality and behaviorof the software being tested.
It brings an unprecedented levelof rigor, traceability, and
auditability to AI poweredquality assurance, which is

(38:28):
invaluable for compliance anddebugging.
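
As a sketch of the versioning idea discussed here, and assuming nothing about Memvid's own tooling: because the whole knowledge base is one file, a content-addressed snapshot store (the same principle Git uses for blobs) is enough to track and revert dataset versions. The function names below are invented for illustration.

```python
import hashlib
import pathlib
import shutil

def snapshot(capsule, store):
    # Save a content-addressed copy of the knowledge file; the version
    # ID is derived from the file's own bytes, so identical content
    # always yields the same ID.
    data = pathlib.Path(capsule).read_bytes()
    version = hashlib.sha256(data).hexdigest()[:12]
    store = pathlib.Path(store)
    store.mkdir(exist_ok=True)
    (store / version).write_bytes(data)
    return version

def revert(capsule, store, version):
    # Restore the knowledge file to an earlier snapshot, so tests can
    # be rerun under identical conditions.
    shutil.copyfile(pathlib.Path(store) / version, capsule)
```

Equally, the single file could simply be committed to Git alongside the test suite and versioned with the code it validates.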

SPEAKER_02 (38:31):
This is all so incredibly compelling and it
truly makes you wonder, what'snext for Memvid?
It's clearly a technology that'sstill in its burgeoning phases.

SPEAKER_03 (38:39):
Still growing, yeah.

SPEAKER_02 (38:41):
But with some really ambitious and exciting plans for
the future that could unlockeven more potential.

SPEAKER_03 (38:45):
Indeed.
While the current capabilitiesare already impressively
disruptive, the roadmap forMemvid is even more ambitious,
designed to broaden itsapplicability significantly.
First, they aim to move beyondtext.

SPEAKER_02 (38:58):
Okay, what does that mean?

SPEAKER_03 (38:59):
Currently, Memvid excels at storing and
compressing text documents, butsoon the plan is for it to
natively support a much widerrange of data types.
We're talking about images,audio clips, and even small
video files, all within itsversatile knowledge capsules.

SPEAKER_02 (39:12):
Wow, so truly multimedia.

SPEAKER_03 (39:14):
Exactly.
This would transform it into atruly universal, multimodal
knowledge capsule, capable ofholding diverse media types that
AI might need to understand andprocess simultaneously.
Imagine an AI having a knowledgebase that includes not just
written descriptions, but also technical diagrams, voice notes from
a user interview, and shortinstructional video snippets.

SPEAKER_02 (39:37):
All in one searchable file.

SPEAKER_03 (39:39):
All instantly searchable within a single file.
This is crucial for AIapplications that interact with
the rich, multimodal complexityof the real world.

SPEAKER_02 (39:47):
That would be a truly massive leap.
What about security, especiallywhen you're encapsulating so
much potentially sensitive datainto a single file?

SPEAKER_03 (39:55):
Security is, as always, absolutely paramount,
particularly when dealing withproprietary or sensitive data.

SPEAKER_00 (40:01):
Has to be.

SPEAKER_03 (40:01):
Future versions of Memvid will include strong,
enterprise-grade encryptionmechanisms to keep your
sensitive data entirely safe andsecure within these video files.
This means that even if a Memvidfile were to fall into the wrong
hands, the information containedwithin it would be completely
unreadable without the correctdecryption key, adding a

(40:21):
critical, robust layer of data protection and ensuring compliance. Good.

SPEAKER_02 (40:28):
And if you're constantly adding new
information to your AI'sknowledge base, like new bug
reports or updated requirements,do you have to rebuild the
entire potentially massive videofile every single time?
That sounds like it could becomea significant bottleneck.

SPEAKER_03 (40:43):
That's a perceptive question, and it points to a
common challenge with any large,dynamically updated data set.
The developers are acutely awareof this and are actively
addressing it with a conceptthey call streaming ingest.

SPEAKER_02 (40:55):
Streaming ingest.
Okay.

SPEAKER_03 (40:56):
The goal here is to allow users to incrementally add
new information to an existing Memvid file without having to
rebuild the entire knowledgecapsule from scratch.

SPEAKER_02 (41:05):
Ah, so just append the new stuff.

SPEAKER_03 (41:07):
Essentially, yes.
This would make updatesdramatically faster and more
efficient, enabling dynamic,continuously updated AI
knowledge bases.
Think of an AI that's constantlylearning from real-time
operational data or live newsfeeds.
Streaming ingest would make thatfeasible.
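
The streaming-ingest feature is still on Memvid's roadmap, so any code here is speculative and uses an invented record format, not the project's. As a sketch of the underlying idea, appending length-prefixed records lets new chunks be added without rewriting existing data:

```python
import json

def append_chunk(path, text):
    # Append one new chunk as a length-prefixed JSON record; earlier
    # records are never touched, so no full rebuild is needed.
    record = json.dumps({"text": text}).encode()
    with open(path, "ab") as f:
        f.write(len(record).to_bytes(4, "big") + record)

def read_chunks(path):
    # Walk the file record by record to recover every chunk in order.
    chunks = []
    with open(path, "rb") as f:
        while size := f.read(4):
            record = f.read(int.from_bytes(size, "big"))
            chunks.append(json.loads(record)["text"])
    return chunks
```

The harder part of the real feature is updating the search index incrementally as records arrive, which this sketch deliberately leaves out.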

SPEAKER_02 (41:22):
And finally, what about performance?
It's already remarkably fast,but can it get even quicker than
900 milliseconds?

SPEAKER_03 (41:30):
The answer is a resounding yes.
A new version of Memvid isactively being developed in the
programming language Rust.

SPEAKER_02 (41:37):
The Rust version again.

SPEAKER_03 (41:38):
Yep.
Rust is renowned across thesoftware development world for
its incredible speed, itsinherent memory safety, and its
raw performance, oftenoutperforming even older, highly
optimized languages like C++ incertain contexts due to its
rigorous compile time checks.
This new Rust version promiseseven quicker processing for both
the initial encoding of datainto the Memvid format and,

(42:00):
crucially, for subsequentretrieval.

SPEAKER_02 (42:02):
Faster still.

SPEAKER_03 (42:03):
And beyond pure speed, it's being designed as a
single, easy-to-use, highlyportable, executable file that
runs almost anywhere, furthersimplifying deployment,
cross-platform compatibility,and overall accessibility.

SPEAKER_04 (42:15):
Nice.

SPEAKER_03 (42:16):
These future features collectively broaden
Memvid's applicability dramatically, truly moving it beyond just a clever text-to-video trick and solidifying its position as a serious AI memory platform.

SPEAKER_02 (42:33):
Okay, so let's bring it all back to the big picture here and what
this all means for the future ofAI.
Memvid is undeniably challengingour conventional understanding
of how data for AI should bestored, accessed, and managed.

SPEAKER_03 (42:45):
It really flips the script.

SPEAKER_02 (42:46):
It does.
By cleverly, almost elegantly,using existing video technology
in a completely novel way, itoffers a powerful, inclusive,
and incredibly affordable andremarkably portable solution for
AI memory management.
It's almost deceptively simplein its core approach, yet its
implications are truly profound,especially for accessibility and

(43:07):
cost.

SPEAKER_03 (43:08):
It absolutely is.
While it's still an emerging open-source technology, its proven capabilities from that initial eye-opening experiment.

SPEAKER_02 (43:15):
The 10,000 PDF one?

SPEAKER_03 (43:16):
Right.
Combined with its ambitious andwell-articulated future plans,
position it as a significantdisruptor in the evolving AI
landscape.
It's poised to accelerate thewidespread adoption of AI,
especially inresource-constrained
environments where traditionalGPU-heavy cloud-dependent setups
are simply not economically orpractically feasible.

SPEAKER_02 (43:37):
Opening doors for more people.

SPEAKER_03 (43:38):
Exactly.
And as we've thoroughly exploredtoday, it promises to
fundamentally enhance theintelligence, efficiency, and
robustness of AI softwaretesting platforms by providing
them with powerful, accessible,and reliably version-controlled
knowledge bases, which is acritical missing piece for many
organizations.

SPEAKER_02 (43:57):
It's truly incredible how something that
quite literally started as ameme on Reddit, a concept
dismissed by some as a quirkyinternet novelty.

SPEAKER_03 (44:04):
Yeah, hard to believe sometimes.

SPEAKER_02 (44:05):
It's so rapidly becoming a serious, legitimate
player in the mainstream AIworld.
It's definitely a technologyworth keeping a very close eye
on because it could truly shiftthe paradigm for how AI is built
and deployed.

SPEAKER_03 (44:17):
Indeed.
We've seen how it works, itssurprising performance, and its
monumental potential impactacross various sectors.
Now, we want to leave you, our listener, with a thought to
genuinely ponder after this deepdive.

SPEAKER_02 (44:30):
Okay, let's hear

SPEAKER_03 (44:31):
it.
Could this unconventional videoas database approach, this
knowledge capsule concept,fundamentally transform your own
AI projects, or perhapsstreamline and enhance your
testing workflows?
How do you currently manage thevast amounts of knowledge and
context your AI systems need,and what are the current pain
points or limitations you face?

(44:52):
Consider the profound potential of such a fundamental shift.

SPEAKER_02 (45:03):
That's a fantastic, truly provocative question to mull over.
Thank you so much for joining us on this deep dive into Memvid
and the future of AI memory.

SPEAKER_03 (45:09):
Been a pleasure.

SPEAKER_02 (45:11):
We sincerely hope you've gained some surprising
insights and are as excitedabout these technological shifts
as we are.
Keep learning, keep exploring,and we'll catch you on the next
deep dive.
